Tuesday, December 16, 2008

Conclusions


From the outset I wanted to answer two very specific statistical questions:

1. When it comes to American foreign policy, is there a statistical difference between the policy preferences of informed and uninformed voters?

H1: µ₁ ≠ µ₂

H2: µ₁ = µ₂

2. Is there a correlation between a voter’s political knowledge and a specific set of foreign policy preferences?

H1: r ≥ .50

H2: r ≤ .49

For both questions 1 and 2, I was unable to reject the null hypothesis. In the case of question 1, I could demonstrate no difference between the answers given by the most informed and the least informed respondents. In the case of question 2, I was unable to demonstrate a relationship between the information score and any particular policy preference.

Is there Wisdom in Crowds?

In The Myth of the Rational Voter, Bryan Caplan explores and then quickly dismisses the notion that there is a “miracle of aggregation”. The miracle of aggregation is the concept that, if you get enough people to answer a question, the aggregate answer is more likely to be correct than the answers of any given individual; it is the concept that there is some sort of “wisdom of crowds”. As Caplan put it, “It reads like an alchemist’s recipe: Mix 99 parts folly and 1 part wisdom to get a compound as good as unadulterated wisdom” (Caplan 8). Caplan asserts that the miracle of aggregation does not apply to politics because there is a systematic error; that is, low information voters have an anti-foreign, anti-market bias (Caplan 10).

My results seem to demonstrate at least a nominal wisdom of crowds, or at least the wisdom of the crowd I surveyed. For example, although the mean information score was 7.40 out of 14, or 53%, the group's aggregate information score was 11 out of 14, or 79% (on 11 of the 14 questions the most common answer was the correct one). And the pro-trade or pro-immigration answer was by far the most common answer on the questions that concerned those issues as well. As I mentioned earlier, my sample did not perfectly reflect the demographic profile of the United States, so I do not claim that my results refute the findings of a well-established economist, but my findings certainly raise a number of questions about the relationship between information and policy preference and about the potential wisdom of crowds.
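To make the aggregation arithmetic concrete, here is a minimal sketch in Python of how an aggregate "modal answer" score can be computed. The respondent data below is entirely made up for illustration (it is not my survey data); the point is simply that the group's most common answer can be right on every question even when the average individual score is mediocre, so long as the errors are scattered rather than systematic.

from collections import Counter

# Hypothetical data: 5 respondents x 4 questions (answer letters) plus an
# answer key. These values are illustrative only, not my survey responses.
responses = [
    ["a", "b", "c", "b"],
    ["a", "c", "c", "d"],
    ["b", "b", "c", "a"],
    ["a", "b", "d", "b"],
    ["c", "b", "c", "b"],
]
answer_key = ["a", "b", "c", "b"]

# Individual scores: the share of questions each respondent got right.
individual = [
    sum(r[q] == answer_key[q] for q in range(len(answer_key))) / len(answer_key)
    for r in responses
]

# Aggregate score: for each question, take the most common (modal) answer
# across all respondents and check it against the key.
modal = [
    Counter(r[q] for r in responses).most_common(1)[0][0]
    for q in range(len(answer_key))
]
aggregate = sum(m == k for m, k in zip(modal, answer_key)) / len(answer_key)

print(f"mean individual score: {sum(individual) / len(individual):.0%}")
print(f"aggregate (modal-answer) score: {aggregate:.0%}")

Run as written, this toy example reports a mean individual score of 70% but an aggregate score of 100%, which is the sort of "miracle" Caplan describes and then disputes.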

Opportunities for Further Research:


This survey raises almost as many questions as it answers. I touched on a few of the issues I would like to explore further during my analysis, but here are a few areas that I think warrant further exploration:

1. I would like to see a survey with a sample that is more reflective of the average American. It might also be interesting to see the difference between college students and non-college students.

2. I would like to see a survey where small groups of participants were led in discussion rather than given a multiple-choice questionnaire. I had one participant write notes in the margins of their survey; it would be interesting to see how overall opinion dynamics might change if respondents were not limited to the answers on the test.

3. Related to #2, I would like to see the results if respondents were briefed on the issues prior to the questionnaire being handed out. These briefings could take the form of a "mini campaign," with one briefer presenting an "anti" position and the other presenting a "pro" position on various issues.

The Results: Trade, Immigration, the Environment

When it came to respondents' policy preferences, I found no difference between the views of those with the 10 highest information scores and those with the 10 lowest. I performed a two-tailed t-test on the aggregate globalization scores of the two groups, with an alpha of .05 and a t critical of ±2.093. The result was t = .59, well inside the non-rejection region, which led me to retain the null hypothesis that there was no statistically significant difference between the globalization scores of the highest and lowest scoring participants. I repeated the t-test for each category: money, security, people, and energy/environment; the results are in the table below:

Category               t score    t critical    Accept/Reject Null?
Money                    .88      ±2.093        Accept
Security                 .72      ±2.093        Accept
Energy/Environment       .70      ±2.093        Accept
People                   .10      ±2.093        Accept






As you can see, the null hypothesis of no difference was not rejected for any category. Given this information, the only logical conclusion is that there was no statistically significant difference between the policy choices of the most and the least well informed.
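For readers who want to reproduce this kind of comparison, here is a minimal sketch of an independent, two-tailed t-test in Python using scipy. The score lists below are placeholders rather than my actual data, and scipy reports a p-value directly, so instead of comparing t to the ±2.093 critical value you simply check whether p falls below the .05 alpha.

from scipy import stats

# Placeholder globalization scores (out of 10) for the ten highest- and ten
# lowest-scoring participants on the civics quiz; not my actual survey data.
top10_scores = [6.5, 7.0, 5.5, 6.0, 8.0, 6.5, 7.5, 5.0, 6.0, 7.0]
bottom10_scores = [6.0, 5.5, 7.0, 6.5, 5.0, 7.5, 6.0, 6.5, 8.0, 5.5]

# Two-tailed, independent-samples t-test at alpha = .05.
t_stat, p_value = stats.ttest_ind(top10_scores, bottom10_scores)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null: the two groups' mean scores differ.")
else:
    print("Fail to reject the null: no detectable difference between the groups.")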


When it came to testing correlations, the results were similar. Overall, the correlation between a higher information score and greater support for all four flows, or even for any one of the four flows, was very low.


Category               Correlation
Money                     .06
Security                  .25
People                   -.03
Environment/Energy        .24

The overall correlation was just .16, which is considered, at best, a very weak correlation.
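For reference, a correlation like these can be computed with scipy's Pearson r; the paired lists below are invented placeholders standing in for each participant's information score and category score (they are not my survey values).

from scipy import stats

# Placeholder paired observations, one pair per participant: civics/information
# score out of 14 and globalization score out of 10. Not my actual data.
info_scores = [4, 6, 7, 9, 11, 5, 8, 10, 12, 3]
globalization_scores = [6.0, 5.5, 7.0, 6.5, 6.0, 7.5, 5.0, 6.5, 7.0, 6.0]

# Pearson's r measures the strength of the linear relationship between the two.
r, p_value = stats.pearsonr(info_scores, globalization_scores)
print(f"r = {r:.2f} (values near 0 indicate little or no linear relationship)")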


Interpreting the Results: Trade and Immigration


The evidence does not seem to support Caplan's or Keeter and Carpini's assertion that there is consistently a difference between the views of informed and uninformed voters. My results seem to suggest that opinion regarding globalization and foreign policy is fairly evenly dispersed between the informed and the uninformed.
That is not to say that my results are necessarily "right" whereas the results from other researchers are necessarily "wrong." Most of the surveys referenced in the literature focused on a wide variety of questions, whereas my survey had specifically tailored questions about globalization and foreign policy. Also, I set my survey up to "look like an election," with multiple-choice questions consisting of clear policy alternatives. Keeter and Carpini and Scott Althaus based their research on Roper phone polls that often allowed the participant to give feedback. It is difficult to tell whether my results would have been different if I had led my participants in discussion rather than simply asking them to fill out a questionnaire.
My test may also differ from the one used by Bryan Caplan, because he based his research on the SAE survey, which asked respondents more theoretical questions. For example, the SAE asked respondents how much a given situation impacted economic growth, such as "There are too many immigrants" (Caplan 59), whereas my test put the respondent in the shoes of a voter, asking them to choose between one candidate who was supportive of immigration and another who was hostile to it. Here is an example of one of the three questions that I used to gauge opinions on the flow of people:


Bill Gates recently appeared before Congress and argued that we need to increase the number of high-skilled workers we allow into the country. Where do the candidates stand on this issue?


Candidate A. I’m against it. I worry that those immigrants will be taking jobs away from hard working Americans. We need to educate our own people instead of bringing in foreigners.


Candidate B. I believe we should allow more highly skilled workers in this country. We should welcome talented people because they are what make America great.


Perhaps low-information respondents to the SAE were looking at the question theoretically, believing that it sounds reasonable that immigrants harm economic growth, but did not find my example of an anti-immigration candidate particularly appealing. Keeter and Carpini remind us that voters often use heuristics to determine which candidate will get their vote, so perhaps my imaginary candidate reminded respondents of a candidate they did not vote for in the past. In any case, it must be noted that 60% of respondents chose answer B and that pro-immigration answers were evenly split between low and high information voters.


There are also demographic factors to consider. By testing college students, I ensured that my participants were probably smarter and better educated than the population at large. For example, although the Census Bureau reports that just 7.4% of Americans 25 and over have an associate's degree (U.S. Census Bureau), 40% of the participants in my sample already had an associate's degree, and presumably the majority would have a four-year degree within 2 or 3 years. This again makes my sample unique, because only about 17% of the entire population 25 and over has a bachelor's degree (U.S. Census Bureau). But, as I demonstrated in section 1, previous research indicates that the information effect should exist independent of the level of formal education. Also, women outnumbered men in my survey nearly 2 to 1, and Caplan suggests that women often come to different conclusions than men when it comes to policy preferences and are less likely to support globalization; but, again, I detected no correlation between gender and globalization scores.


The policy preference scores were somewhat like the civics question scores; there were examples of individuals who attained a very low globalization score, but the aggregate score was actually very solid. Out of 10 possible "4-flows" points, the low score was 3.5, indicating something akin to an isolationist position on most issues. But the mean was 6.4 out of 10 overall, and when the questions are reduced to only issues related to trade and immigration, the mean is 4.23 out of a possible 6, or roughly 71% approval for the candidate with pro-globalization views. Overall, the responses to questions about trade and immigration paint a picture of an American public that is not prepared to give in to more isolationist impulses, even in light of the current economic downturn.


Interpreting the Results: The Flow of Security


Respondents were generally receptive to trade and economic globalization, but they were far more hesitant about the flow of security. I built my security questions around a hypothetical humanitarian intervention in the fictional African country of Zoombia. The Zoombia scenario was a rough amalgamation of various humanitarian crises that the international community has faced since the end of the Long War. There was a leader who refused to leave after losing an election, as Manuel Noriega had done in Panama and as Robert Mugabe is doing in Zimbabwe. There was ethnic cleansing, as had happened in the Balkans, Rwanda, and Sudan. And there was mass starvation, as had occurred in North Korea and Zimbabwe. Overall, respondents were unlikely to support a military intervention to topple the leader of Zoombia, even when casualties were guaranteed to be low, but when the U.N. Security Council authorized action the answers changed.


Question 13 read:
Zoombia is a small country in Africa. Their leader, General Tyler, declared himself Dictator for Life after he lost an election several years ago. Since that time the economy of Zoombia has deteriorated, and last year a famine killed 600,000 people. Reports have recently started coming in that General Tyler has blamed an ethnic minority for the famine and has unleashed his army on the countryside to murder every man, woman, and child who claims membership in the minority. The CIA estimates that 1 million people could be killed in the next few weeks. General Tyler has resisted all diplomatic overtures, and the U.S. does not even recognize his government. The U.N. Security Council is deadlocked and is not expected to authorize military action. How should the U.S. react?


A. It's a tragedy but none of our business.

B. Sanctions and diplomatic pressure.

C. We should use air power to destroy the Zoombian army but not send ground forces.

D. We should invade, depose General Tyler, and do what it takes to rebuild the country.


The most common answer to this question, with 13 responses, was B, signaling support for the U.S. to use sanctions and diplomatic pressure. Two respondents answered C, in support of air strikes, two supported a full-on invasion of Zoombia, and one felt that the situation was none of our business. Finally, two respondents gave no answer at all.


At first blush it may seem that a reluctance to invade another country is the logical outgrowth of the U.S. sustaining several thousand casualties in Iraq. But I followed up question 13 by asking respondents if they would change their answer if they knew that the U.S. would sustain fewer than ten casualties. Only one respondent changed their answer, and that person changed their mind from supporting airstrikes to supporting an invasion. With this information in mind, it would be difficult to argue that respondents were reluctant to support an invasion only because of a fear of casualties. Also, the original answer set provided a "Leviathan" option for airstrikes, which historically have posed very little risk to U.S. personnel, but only 10% of respondents supported airstrikes.


One factor that seemed to move people to support military action was U.N. approval. In question 16 I asked if the respondent’s answer would change if the U.N. approved the operation. This time 9 respondents stood opposed to any action, 4 supported airstrikes and 3 approved of a U.S. invasion with U.N. peacekeepers. Leaving out the two respondents who chose to give no answer, and including the two respondents who had supported an invasion from the beginning, U.N. approval seems to bring support for military action against Zoombia to 50/50, although those who support action do not agree on what type of action should be taken.

Setting aside hypothetical interventions for a moment, I also asked the respondents how they felt about Operation Iraqi Freedom and the subsequent occupation of Iraq. Question 21 was, "Do you believe the war in Iraq was a mistake?" 65% of respondents answered yes. This is very close to the 63% of Americans who reported in a recent Gallup poll that they believed the war in Iraq was a mistake (Silva), so my sample was fairly representative of the views of all Americans with regard to the war in Iraq. One might assume that low support for the war in Iraq is connected to low support for interventions abroad, but I must point out that I was able to demonstrate no correlation between a higher security score and support for the war in Iraq. Also, question 20 asked respondents what type of action they might support if Iran continues its nuclear program; the most common answer, with 11 responses, was to increase sanctions. No respondent chose answer 'd', which was military action.
On the subject of security, I also asked respondents what situation posed the greatest threat to American security in the 21st century. Question 22 read:


What do you think represents the greatest threat to American security in the 21st century?
a. China
b. Terrorism
c. Food shortages
d. Global Warming
e. Populist demands for Protectionism


There was no majority for any one answer, but the most common answer was 'b', terrorism, with 7 responses. Interestingly, the second most selected answer was China, with 6 responses. Food shortages had 1 response, and two participants selected more than one answer. No one chose global warming as the number one security threat for the 21st century.


The results of my survey seem to suggest that the new president should be wary of getting American troops involved in actions abroad without international legitimacy. This is a topic I will address at some length in section three, but for now it is sufficient to say that there is an extremely complex relationship between public opinion and military actions abroad, and the results of my survey are not, from a historical perspective, all that unusual.

Interpreting the Results: The Environment
When it came to environmental issues, respondents largely supported the U.S. signing a binding treaty on carbon emissions but seemed slightly less enthusiastic when it came to a logical means of accomplishing that goal. Question 18 read:


Would you be willing to support the U.S. signing onto a binding treaty with other nations to cut carbon emissions?

a. Yes, if developing nations such as India and China have to play by the same rules we do.

b. Yes, we should set an example regardless of the actions of other nations.

c. No.


The most common answer was 'b', with 12 out of 20 respondents agreeing that the U.S. should sign a binding treaty to reduce CO2 emissions regardless of the actions of other nations. Question 19 asked if the respondent would support a carbon tax, and only 11 respondents said that they would. This discrepancy in answers makes me wonder whether the respondents fully understood the issue of carbon reduction, because many solutions, including the Kyoto Protocol, would result in carbon being effectively "taxed," either directly or by being "capped." The one respondent who supported a binding treaty to reduce carbon but did not support the taxing of carbon makes me wonder if answers to this question would have changed if the participants had been fully briefed on the issues before being given the survey.

The Results: Civic Knowledge

I administered my survey to a sample of 20 students currently enrolled at Capital University. I administered the survey on two separate dates, November 1st and November 5th, 2008, each time to a class taught by Dr. Mike Yosha. Capital University has undergraduate programs geared towards both traditional undergraduates and working adults, and the classes that I surveyed had both traditional and non-traditional students. The class on the 1st was a 100 level class on globalization, and the class on the 5th was a 100 level professional studies course; there were ten students in each course. While I did not ask the participants for their exact age, I did ask them to give their age range, and the most common answer was 39-48. Women made up the bulk of participants and they outnumbered their male counterparts 12 to 8.
When I set out on this project I had three specific questions in mind, and the answers to the first two were fairly surprising given the research I reviewed in preparation for the survey. First, I was unable to demonstrate any statistical difference between the policy preferences of informed and uninformed participants. Second, I was unable to demonstrate any significant correlation between participants' information scores and their globalization scores.


When it came to raw scores on the civics test, the results were actually fairly close to what the research suggested. Keeter and Carpini's research suggested that approximately half the respondents would fall into the category of "uninformed," and my survey results had a median score of 7 (out of a possible 14); about half the participants scored less than 50% on the civics test, placing them firmly in the category of people who do not know much about what government 'is' or 'does'.

Looking at some specific questions illustrates just how little some participants knew about government. For example, question 31 asked:


Which of these statements most accurately describes the official power that a president wields over the economy?
a. Per Article II of the Constitution, Presidents are responsible for creating jobs and keeping interest rates and taxes low.
b. Presidents use executive orders to set prices of public goods such as electricity and oil.
c. Presidents have the power to appoint a treasury secretary and the Fed chair (if the Fed chair happens to be up for reappointment), beyond that they have little direct control over the economy.
d. The President sets interest rates and adjusts the value of the dollar against a basket of other currencies.


Seven out of twenty (35%) of the respondents did not answer this question correctly. Two provided no answer, one respondent answered 'd', two answered 'a', and two answered 'b'. This is a good time to note that each respondent was a current student at a university with a selective admissions policy.
While looking at some of the individual scores was almost frightening, a funny thing happened when all the scores were added together; the participants became almost perfectly informed. With just three exceptions, the most common answer to every question was the right answer. For example, on question 31, although 7 people got it wrong, the group overwhelmingly chose the right answer. This data suggests that, although individuals may be incorrect in answering specific questions, in aggregate the American public may be very well informed.


Of the three exceptional questions, where the majority chose the wrong answer, two concerned the names of people in key government positions and one concerned knowledge of the U.S. Constitution. Question 7 asked, "Who is the current Secretary of Defense?" While 5 respondents correctly identified Secretary Robert Gates, 6 respondents identified General David Petraeus, the current head of CENTCOM, as the defense secretary. While the selection of Petraeus as SECDEF does betray a lack of knowledge about who's who in the cabinet, it might also reflect the fact that the general has become a prominent media figure over the last year, whereas Secretary Gates has maintained a somewhat lower profile than his predecessor, Donald Rumsfeld. On a related note, Rumsfeld, whose daily give and take with Pentagon reporters was fodder for news broadcasts, was chosen by the same number of participants as Secretary Gates. Another thought is that General Petraeus is often seen in photographs and video clips wearing his U.S. Army uniform, whereas Gates is generally seen wearing the same dark suit that every congressman, senator, cabinet secretary, and lawyer wears, so perhaps respondents visually connect Petraeus to the military and assume that must mean he is SECDEF. With this information in mind, it might be interesting to ask questions like this in the future using pictures of the choices rather than simple text. This question saw a fairly even distribution of correct answers across information quartiles: 40% of participants in the top quartile answered the question correctly, 20% in the second quartile, 40% in the third quartile, and 20% in the fourth quartile.



Question 8 is another example of the majority choosing the wrong answer, and it may also be an example of a high-publicity figure being chosen over a less well-known public figure. Question 8 read, "Who is the current Senate minority leader?" The correct answer, Kentucky Senator Mitch McConnell, was the choice of just 4 participants. The most common answer was Nancy Pelosi, who is actually the Speaker of the House. As with Gates versus Rumsfeld and Petraeus, Pelosi is a higher profile figure than Mitch McConnell. Pelosi's higher profile is due to the nature of the Speaker of the House, a position that comes with both more power and a higher profile than any given position in the Senate. It must be noted that Pelosi is not only a member of the wrong house, but also of the wrong party, to be minority leader. This indicates that, although participants may have recognized Pelosi's name, they apparently did not know which house she served in, nor did they know which party had minority status in either the House or the Senate. The top quartile of informed participants, those who scored more than 10 correct, were most likely to get the answer right, with 3 out of 5 correctly identifying McConnell. Scores dropped off significantly in the second quartile, with only 20% selecting the correct answer. No participants in the bottom half (those scoring less than 50% on the civics quiz) selected the correct answer to question 8.


The final question where the majority got the answer wrong concerned the U.S. Constitution, and it seems to reveal some confusion about the difference between a constitutional article and a constitutional amendment. Question 32 read, "Article I of the Constitution deals with?" Just three people selected the correct answer, which was answer b, Congress. The majority clearly confused Article I with the First Amendment, because 10 people selected answer c, "The right to freedom of religion and speech," as the correct answer. Like question 8, the correct answers to question 32 were heavily concentrated in the upper quartile: 2 out of 3 correct answers came from those in the top quartile, whereas one correct answer came from a participant in the third quartile. There were no correct answers in either the second or fourth quartiles.
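As a note on method, per-quartile breakdowns like the ones above can be produced mechanically once each participant's information score and per-question results are in a table. Here is a rough sketch using pandas; the column names and values are hypothetical stand-ins, not the actual survey file.

import pandas as pd

# Hypothetical survey frame: one row per participant, with a total information
# score (out of 14) and a 1/0 flag for whether question 8 was answered correctly.
df = pd.DataFrame({
    "info_score": [3, 5, 6, 7, 7, 8, 9, 10, 11, 12, 4, 6, 7, 8, 9, 11, 5, 6, 10, 13],
    "q8_correct": [0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1],
})

# Bucket participants into quartiles by information score ("top" = most informed),
# then compute the share answering question 8 correctly within each quartile.
df["quartile"] = pd.qcut(df["info_score"], 4, labels=["fourth", "third", "second", "top"])
print(df.groupby("quartile", observed=False)["q8_correct"].mean())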

The Table of Contents to my Undergraduate Thesis

Table of Contents
INTRODUCTION: 3
On Informed Versus Uninformed: 6
SECTION 1.0: PUBLIC CHOICES 7
Uninformed Choices 10
Uninformed Bias 13
THE HUNT FOR UNINFORMED VOTERS OR HOW I LEARNED TO STOP WORRYING AND BLAME CANADA! 19
CONCLUSION: 21
SECTION 2.0: THE SURVEY METHODOLOGY 23
THE RESULTS 25
Interpreting the Results: Trade and Immigration 31
Interpreting the Results: The Flow of Security 34
Interpreting the Results: The Environment 38
CONCLUSIONS: 39
Is there Wisdom in Crowds? 39
Opportunities for Further Research: 40
SECTION 3.0: GRAND STRATEGY AND THE AMERICAN VOTER 42
ON GRAND STRATEGY AND MORAL WARFARE 42
A Strategic Theory in Search of a Policy 45
…And it’s 1,2,3 what are we fighting for? 51
All the Wrong Lessons 56
PUBLIC CHOICE IN THE AGE OF GENOCIDE: THE BALKANS 58
The End of History 59
The Constitutional Order of States 60
Eve of Destruction 63
A Mirror Image: Vietnam and Bosnia 66
The Man Comes Around 70
17 Resolutions 70
PUBLIC CHOICE AND A STRATEGY OF PRECLUSION 73
The Next Epochal War 75
Globalization versus Globalization with a Goatee 79
CONCLUSION: BARACK OBAMA AND THE NEXT AMERICAN CENTURY 85
APPENDIX 89
BIBLIOGRAPHY 108

The Abstract to my thesis

Part public opinion poll, part diplomatic history, this paper examines the link between American Grand Strategy and public opinion. I begin by examining the link between a voter's civic knowledge and their policy preferences on issues related to trade, immigration, security, and the environment. I find that, contrary to the literature I reviewed on public opinion, Americans are generally supportive of immigration, international trade, and international environmental regulations regardless of their level of civic knowledge. I also find that, even in light of America's experience in Iraq, Americans appear prepared to support humanitarian interventions within certain parameters. When I examine my findings in the context of two past American foreign interventions, I come to the conclusion that the American public today is eager to see the U.S. both participate in the global economy and continue its traditional role in the international security environment. In fact, in many ways, the American public may be too internationalist, in that they may have too much faith in international institutions such as the U.N. and NATO.