What kind of data do political science researchers have access to?
This lesson addresses some of the different types of data analysed by political scientists. First, let’s look at research that uses experiments. Experimental research is considered the “gold standard,” and other methods usually attempt to replicate the underlying logic of experimental design under conditions that do not really permit experimentation. We will then look at survey research, because it is so widely used and because it illustrates very well the opportunities and problems that arise when research is taken out of the lab and into the real world.

Experimental Design

In a classic experimental design, there is an independent variable, which the researchers control, and a dependent variable, which (presumably) responds to the independent variable. To conduct the experiment, participants are randomly divided into two (or more) groups, and each group is assigned a value of the independent variable. For example, a medical researcher investigating a possible treatment for cancer may randomly assign mice with cancer to two different groups, one of which (the experimental group) will receive the treatment while the other (the control group) will not. The researcher will then measure, for example, differences in survival rates between the two groups.

In a pure experimental design one would have:

  • Randomly assigned subjects (R)
  • Pre-observations of the subject group (O1)
  • An intervention by the researcher (X, the eXperiment)
  • Post-observations of the subject group (O2)
  • The control group’s observations (O3, O4)

The classical empirical research design is often summarised as:

    R  O1   X   O2
    R  O3       O4

This design is not very common in social science research, but it expresses an ideal world for the researcher. The researcher can examine the state or condition of the subjects prior to an intervention (O1) and compare it with the state or condition afterwards (O2). The assumption is that the changes between O1 and O2 are a result of “X”. However, there are often competing explanations for the differences between O1 and O2, so the researcher uses the second randomly assigned group (R, O3, O4) as a mechanism to control for all those explanations the researcher cannot foresee. This latter group is called the control group.
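
To make the notation concrete, here is a small simulated experiment, a minimal sketch in Python in which all numbers, including the size of the treatment effect, are invented for illustration. One common way to express the comparison is the difference between the two groups’ changes, (O2 - O1) - (O4 - O3):

    import random
    import statistics

    random.seed(42)  # fixed seed so the illustration is reproducible

    # Invented baseline scores for 200 subjects.
    subjects = [random.gauss(50, 5) for _ in range(200)]
    random.shuffle(subjects)            # R: random assignment
    treat_pre = subjects[:100]          # O1
    ctrl_pre = subjects[100:]           # O3

    EFFECT = 10                         # assumed effect of X, for illustration
    treat_post = [s + EFFECT + random.gauss(0, 2) for s in treat_pre]  # O2
    ctrl_post = [s + random.gauss(0, 2) for s in ctrl_pre]             # O4

    o1, o2 = statistics.mean(treat_pre), statistics.mean(treat_post)
    o3, o4 = statistics.mean(ctrl_pre), statistics.mean(ctrl_post)
    print(f"O1={o1:.1f} O2={o2:.1f} O3={o3:.1f} O4={o4:.1f}")
    print(f"Estimated effect of X: {(o2 - o1) - (o4 - o3):.1f}")  # close to 10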

If we wanted to prove that a research methods course improved students’ ability to read and understand the literature in international studies, then we could randomly assign some students to research classes and other students to non-research courses. Before the beginning of classes, we would assess all students on their reading and understanding of the literature. We would then give the randomly assigned students in the research course a full term of instruction. At the end of the term we would again evaluate all students on their reading and comprehension of the literature. This would be a classical research design, in which:

  • Ideally O1 would be roughly equal to O3
  • Ideally O2 would be higher than O1
  • Ideally O4 would be higher than O3 because some improvement in reading the literature should be developed in all courses.
  • Ideally O2 would be higher than O4

The design, along with the statistical analysis of the assessments, enables us to see if the research class (X) caused a change. Cause is established by three factors:

  1. Did the researcher’s intervention (X) precede the outcome (O2)?
  2. Is X statistically associated with the changes in O2?
  3. Are there no other explanations for the effect? (Chance variation is accounted for by the control group’s O3 and O4.)

This would be the ideal situation, but we can easily understand why this type of research design is not more widely used in social sciences:

  • Students sign up for classes and are not randomly assigned. Therefore “R” is not feasible.
  • Testing everyone prior to the academic term would be costly. The O1 and O3 observations, while not impossible, are unlikely.
  • At best we might find, at the end of term, a comparison group of students who would be willing to participate, but their investment might be considerably less.

This is why experimental design is not usually used in the social sciences. Even though social science researchers know what an ideal research design is, it is usually not feasible. Because of that, in social science we talk about variables influencing other variables, not causing effects. The independent variable “research class” influences students’ assessment scores on comprehension, but it may not be the cause, because we have not met the scientific standard for cause.

Experimental design has an important place in international research, both in the laboratory and in field studies. An example of a subject well suited to laboratory research is the study of political campaign ads. Stevens, for example, conducted a laboratory experiment in which undergraduates enrolled in introductory political science classes were shown negative (attack) ads.[1] He found that these ads increased the information levels of sophisticated subjects (those scoring high on an index of political interest and knowledge), but produced little or no gain among those less sophisticated. (He found similar results from non-experimental analysis using data from the American National Election Study.) An example of field research is the study by Gerber and Green of the impact of different methods of encouraging voter turnout.[2] The researchers randomly divided registered voters into different groups, one (the control group) that received no stimulus, the others (the experimental groups) that were contacted by mail, by phone, or in person with “get out the vote” messages. Gerber and Green found that face-to-face contact had the greatest impact, while telephone calls had essentially none.

Survey Research and Sampling

Survey research (or “public opinion polling”) is another “scientific” approach to the study of human behaviour, but there is as much art as science in it.
Good survey research makes great efforts to use rigorous sampling methods. It also devotes a lot of effort to careful design and pre-testing of questionnaires, training of interviewers, and processing and analysing data. On the other hand, survey research involves a great deal of uncertainty, and requires making a number of judgment calls about which reasonable people will disagree.

Random Sample

Sampling is very important in social science research. It is often impossible to study all of the relevant people, and so we instead take a sample from a larger population. Survey research is the most common, though not the only, social science application of sampling techniques.
Ideally, a sample should be random. In a random sample, each member of the population has an equal probability of being included.[3] This is important because a random sample provides an unbiased estimate of the characteristics of the population, so that respondents constitute a representative sample of the population. Put another way, if 60 percent of a random sample of voters favour candidate X then, although it would be impractical to interview all voters, our best guess is that 60 percent of all voters also favour candidate X.

The reliability of this best guess increases with the size of the sample. If we have interviewed a sample of only 10 voters, our results will be far less reliable than if we have interviewed a thousand. Ninety-five times out of a hundred, a random sample of 1,000 will be accurate to within about 3 percentage points. In more formal terms, such a sample has a margin of error, or confidence interval, of approximately plus or minus (±) 3 percent at a 95 percent confidence level. If a random sample of 1,000 voters shows that 60 percent favour candidate X, there is a 95 percent chance that the real figure in the population is somewhere in the range of about 57 to 63 percent.
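
If you want to check this arithmetic yourself, here is a minimal Python sketch using the usual normal approximation for a proportion; the function name is ours, not part of the lesson:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate margin of error for a proportion p estimated
        from a simple random sample of size n, at a 95% confidence
        level (z = 1.96)."""
        return z * math.sqrt(p * (1 - p) / n)

    # 60 percent favour candidate X in a sample of 1,000:
    print(f"±{margin_of_error(1000, p=0.6):.1%}")  # about ±3 points
    print(f"±{margin_of_error(10, p=0.6):.1%}")    # a sample of 10 is far less reliable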

activity

Identify sample size (Required Activity)

  • You can find an elegant on-line sample size calculator at Creative Research Systems.
  • In the first dialog box:
    • select the 95% confidence level,
    • enter 3 for the confidence interval and 20,000 for the population,
    • then click on “Calculate.”
    • What sample size do you need?
    • Without changing anything else, add a zero to the population size, changing it to 200,000.
    • How much does the needed sample size increase?
    • Add three more zeroes, making the population size 200,000,000 (two hundred million).
    • Note: beyond a certain point, the size of the population makes little difference. This means that if you were sampling people in a small town, you would need almost as large a sample as you would if your population consisted of the entire country.
    • In the second dialog box, select the 95% confidence level,
    • enter 1,000 as the sample size and 20,000,000 as the population.
    • Leave “Percentage” at 50. This refers to the estimated percent of the population having the characteristic you are sampling, and 50 is the most conservative option.
    • Click on “Calculate.” What is the confidence interval?
    • Without changing anything else, double the sample size to 2000 and again click on “Calculate.”
    • Notice that the confidence interval is not reduced dramatically.
    • That is why surveys usually do not exceed about 1,500 respondents. The number of interviews is a dominant factor in driving the costs of a survey, and beyond a certain point increasing this number is not cost-effective: costs will increase almost proportionately, but the margin of error will be reduced only a little. (A sketch of the arithmetic behind such calculators appears after this list.)
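
A minimal sketch of the arithmetic such calculators typically perform, using the standard formula with a finite population correction; results may differ from the online calculator by a respondent or two because of rounding:

    import math

    def sample_size(population, moe=0.03, p=0.5, z=1.96):
        """Required simple-random-sample size for a given margin of
        error, applying the finite population correction."""
        n0 = z**2 * p * (1 - p) / moe**2          # infinite-population size
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    for pop in (20_000, 200_000, 200_000_000):
        print(f"population {pop:>11,}: sample size {sample_size(pop)}")
    # The required sample barely grows as the population explodes,
    # which is the point of the exercise above.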

Cluster Sample

Often it is not practical to use a pure random sample. One common shortcut is the area cluster sample. In this approach, a number of Primary Sampling Units (PSUs) are selected at random within a larger geographic area. For example, a study of Australia might begin by choosing a subset of regional councils. Within each PSU, smaller areas may be selected in several stages, down to the individual household. Within each household, an individual respondent is then chosen. Ideally, each stage of the process is carried out at random. Even when this is done, the resulting sampling error will tend to be a little higher than in a pure random sample,[4] but the cost savings may make the trade-off well worthwhile.
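
The multi-stage logic can be sketched in a few lines of Python; the frame of councils and households below is entirely hypothetical:

    import random

    # Hypothetical frame: each council (PSU) maps to its households.
    frame = {f"council_{i}": [f"household_{i}_{j}" for j in range(500)]
             for i in range(80)}

    psus = random.sample(sorted(frame), 10)             # stage 1: sample PSUs
    sample = [hh for psu in psus
              for hh in random.sample(frame[psu], 20)]  # stage 2: households
    print(len(sample))  # 10 PSUs x 20 households = 200 interviews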

Stratified Sample

Somewhat similar to a cluster sample is a stratified sample. Suppose, for example, that you were studying opinion among students at a university, and wanted to be sure that the numbers of undergraduates, postgraduates, and research students were representative of the student body as a whole. You can then divide the student body into three strata representing these categories, and select students at random from each stratum in proportion to its share of the student population.
Sometimes a research design will call for deliberate oversampling of some small groups so that there are sufficient cases to permit reliable analysis. For example, if there are only a few research students at the university, you might need to oversample them so you would have enough on which to base comparisons with coursework students. However, any time analysis is carried out that combines the different strata, cases must be weighted in order to correct for this oversampling. In our example, failure to weight cases would result in overestimating the research student component of student body opinion. (A sketch of such weighting follows.)
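
A minimal sketch of design weights under oversampling, with invented enrolment and sample figures:

    # Invented enrolment figures and a deliberately oversampled
    # research stratum (120 of 500 interviews for 5% of students).
    strata = {"undergraduate": 15_000, "postgraduate": 4_000, "research": 1_000}
    sampled = {"undergraduate": 300, "postgraduate": 80, "research": 120}

    pop_total = sum(strata.values())
    n_total = sum(sampled.values())

    # Design weight = population share / sample share; applying these
    # weights undoes the oversampling when strata are combined.
    for group in strata:
        weight = (strata[group] / pop_total) / (sampled[group] / n_total)
        print(f"{group:>13}: weight {weight:.2f}")
    # Research students get a weight well below 1 (about 0.21), so
    # they no longer dominate combined estimates.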

Which sampling strategy?

What’s the essential difference between area cluster sampling and stratified sampling? Their purpose.

  • An area cluster sample is appropriate when it is not practical to conduct a random sample over the entire population being studied.
  • A stratified sample is appropriate when it is important to make sure enough respondents from subcategories of the population are included in the sample.
  • Important: in both approaches, sampling is conducted at random within each cluster or stratum.

Many so-called public opinion polls fail to use random sampling methods. As a citizen, as well as a student of social science, it is important that you recognize such polls, and their severe limitations. You may have been part of a poll before. You may have been stopped at a shopping mall by someone with a clipboard, and asked some questions about your shopping habits. Maybe while reading a story online, you clicked on a question about taxation policy. Perhaps you filled out and mailed a questionnaire posted to you.

None of these surveys uses real random sampling. One bias they share is self-selection: those who opt to stop, answer questions, click, or mail in a survey may well differ systematically in their views from those who do not. Even if those questioning customers at the mall are careful to include representative numbers of men and women, older and younger shoppers, people of different races, etc., this approach, called a quota sample, should not be confused with a stratified random sample, because there is no guarantee of representativeness within the various groups questioned. Those visiting the mall may have different views from demographically similar people who, for example, shop online or at farmers’ markets.

When statistical methods are used to make inferences about populations based on samples, they cannot legitimately be applied to non-probability samples. It is best to avoid such samples if possible. When you see surveys based on non-probability samples reported in the media, you may find them interesting or entertaining, but you should take their findings ‘with a grain of salt’; that is, be skeptical about them.

Even in the best designed surveys, strict random sampling is a goal that can almost never be fully achieved under real-world conditions, resulting in non-random (or “systematic”) error. For example, think of a survey conducted by phone. Not everyone has a phone. Not all who do have a phone are home when called. A version of the questionnaire may not be available in a language spoken by the person to be interviewed, or the organisation conducting the survey may not have interviewers fluent in that person’s language. People may refuse to participate, especially if they have had their patience tried by telemarketers. The resulting sample of people who are willing and able to participate may differ in systematic ways from other potential respondents.

Apart from non-randomness of samples, there are other sources of systematic error in surveys. Slight differences in question wording may produce large differences in how questions are answered. The order in which questions are asked may influence responses. Respondents may lie.

When surveys are conducted by mail, they usually yield very low and often unrepresentative response rates, so the preferred survey methods are face-to-face or telephone interviews. The Australian five-yearly Census, however, is still conducted by mailed forms. As a result, respondents sometimes do not return a Census form, or fail to answer every applicable question, and this reduces the quality of the data obtained from it. In contrast, the General Social Survey still employs face-to-face interviews wherever feasible.[5]

In general, however, telephone surveys have become the method of choice. The biggest advantage is cost: the per-interview cost of telephone interviews is simply far less than what is required when interviewers are sent door-to-door (thus spending more time getting to interview sites and incurring travel expenses). There are other factors favouring the use of the telephone. Interviewers can be more easily trained and more closely supervised. Problems that arise can be dealt with on the spot. Computer Assisted Telephone Interviewing (CATI) technology can be employed for such things as random-digit dialing, call-backs, and data entry.
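
As a toy illustration of random-digit dialing, the sketch below draws the final digits at random, so unlisted numbers have the same chance of selection as listed ones; the prefix pool is invented, whereas real RDD frames are built from known working exchanges:

    import random

    PREFIXES = ["0299", "0388", "0733"]  # hypothetical area code + exchange pool

    def rdd_number():
        """Pick an exchange at random, then append six random digits."""
        suffix = "".join(str(random.randint(0, 9)) for _ in range(6))
        return random.choice(PREFIXES) + suffix

    print([rdd_number() for _ in range(5)])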

A disadvantage of telephone surveys compared to door-to-door face-to-face interviews is their relatively low response rates. When the American National Election Study split its sample between face-to-face and telephone interviews for its 2000 pre-election survey, it obtained a response rate of 64.8 percent for the former, compared to 57.2 percent for the latter. An analysis of a number of telephone and face-to-face surveys showed that face-to-face surveys were generally more representative of the demographic characteristics of the general population.[6] Note that most telephone surveys produce response rates far lower than that obtained by the ANES, and “telephone survey response rates are on a precipitous decline.”[7] Another problem that has surfaced in recent years is that many (especially younger) voters rely entirely on mobile phones rather than on landlines. Calling mobile phones is more expensive.

These difficulties have led to at least two alternative approaches, the “robo” poll and the online poll, both often referred to as “interactive” surveys.[8] A robo poll, known more formally as an Interactive Voice Response (IVR) poll, is a telephone survey in which the questions are asked by a computer using the voice of a professional actor. Though these polls have very poor response rates (because it is much easier to hang up on a computer than on a fellow human being), the per-interview costs are much lower. The hope is that larger sample sizes can compensate for samples that are less random.

Another approach is the online poll, in which the “interaction” is conducted over the Internet. Like robo polls, online polls are also less expensive than traditional telephone surveys, and so larger samples are feasible. Because they require respondents to “opt in,” however, the results are not really random samples.

When samples, however obtained, differ from known characteristics of the population (determined, for example, by comparing the sample to recent census figures), they can be weighted to compensate for under- or over-representation of certain groups. There is still no way of knowing, however, whether respondents and non-respondents within these groups differ in their political attitudes and behaviour. A sketch of this kind of weighting follows.
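
A minimal sketch of such post-stratification weighting, comparing invented sample shares with invented census shares:

    # Invented figures: young respondents are under-represented in the
    # sample relative to the census, older respondents over-represented.
    census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
    sample_share = {"18-34": 0.18, "35-54": 0.34, "55+": 0.48}

    # Weight = census share / sample share for each group.
    weights = {g: census_share[g] / sample_share[g] for g in census_share}
    for g, w in weights.items():
        print(f"{g}: weight {w:.2f}")
    # 18-34: 1.67 (weighted up); 55+: 0.73 (weighted down). Weighting
    # cannot fix differences between respondents and non-respondents
    # within each group, as noted above.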

Content analysis

Statistics can also be applied to data that we normally think of as qualitative, and content analysis does exactly that. The study of texts and documents has long been important in international research; content analysis attempts to make such study more rigorous. An example is the Penn State Event Data Project (eventdata.psu.edu). Researchers there have developed software to read, code, and analyse extensive electronic document collections (e.g., the Reuters wire service reports) in order to study patterns of interaction between nations. Their hope is that a better understanding of these interactions might help avoid international conflict.
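
At its simplest, quantitative content analysis is systematic counting. The sketch below, a toy example and not the Penn State software, counts selected terms in a document:

    import re
    from collections import Counter

    def term_counts(text, terms):
        """Count how often each selected term occurs in a text:
        the simplest form of quantitative content analysis."""
        words = Counter(re.findall(r"[a-z']+", text.lower()))
        return {t: words[t] for t in terms}

    speech = "We will move forward together. Forward to a stronger economy."
    print(term_counts(speech, ["forward", "together", "economy"]))
    # {'forward': 2, 'together': 1, 'economy': 1}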

Other forms of data used in international studies are discussed in the module examining qualitative research methods.

key points

Key points: Varieties of data

  • Area cluster sample
  • Computer Assisted Telephone Interviewing (CATI)
  • Confidence interval
  • Confidence level
  • Content analysis
  • Exit polls
  • Experimental design
  • Interactive Voice Response (IVR) poll
  • Margin of error
  • Population
  • Primary Sampling Units (PSUs)
  • Quota sample
  • Random sample
  • Representative sample
  • Robo poll
  • Sample
  • Stratified sample
  • Survey research
  • Universe

activities

Activities: data analysis exercises

Share what you learned from these practical exercises with the group in the discussion forum.

  1. Assume that you wish to survey students at your university about their opinions on various issues, their political party of choice, and their voting intentions in the next election. Design an appropriate questionnaire, decide how many students you will need for your sample, and spell out how the sampling will be done. (If you are actually planning to carry out such a survey, be aware that your institution has, or should have, rigorous legal and ethical standards for conducting research involving human subjects. Before carrying out your research, your design will need to be approved by your university’s ethics committee. Allow plenty of time to find out what the standards are, to incorporate them into your design, and to gain approval.)
  2. Find two journal articles with survey research, and determine:
    1. What kind of sampling they claim to have employed
    2. What kind of sampling they actually employed
    3. How their sampling may have affected their results.
    4. Post your answers in the discussion forum.
  3. Find the transcripts of the acceptance speeches given by Tony Abbott and Julia Gillard when each of them won an election. Copy and paste them into a word processor (such as MS Word). What words or terms do you think were especially closely linked to each of the candidates? Search each document to see how often this word or term was used by each speaker. (In Word, use the “Find” feature in the “Home” menu to search a document.)


References

  1. Daniel Stevens, “Separate and Unequal Effects: Information, Political Sophistication and Negative Advertising in American Elections,” Political Research Quarterly 58 (September 2005): 413-425.
  2. Alan S. Gerber and Donald P. Green, “The Effects of Personal Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment,” American Political Science Review 94 (Sept. 2000): 653-664.
  3. Strictly speaking, a distinction needs to be made between a “simple random sample,” in which 1) each item and 2) each combination of items in the population have an equal probability of being included in the sample, and a “systematic random sample,” which meets only the first condition. An example of a systematic random sample would be one chosen by selecting every 100th name from a phone directory. In this case, two persons who are listed adjacent to one another in the directory would never end up in the same sample. For most purposes, however, this distinction is of no great practical importance.
  4. Herbert M. Blalock, Jr., Social Statistics, revised 2nd ed. (New York: McGraw-Hill, 1979): 568-569.
  5. General Social Survey, “FAQs: 6. How is the GSS administered?”, http://publicdata.norc.org:41000/gssbeta/faqs.html#6. Accessed February 20, 2013.
  6. Charles H. Ellis and Jon A. Krosnick, “Comparing Telephone and Face-to-Face Surveys in Terms of Sample Representativeness: A Meta-Analysis of Demographic Characteristics,” National Election Studies: Technical Report #59. http://www.electionstudies.org/resources/papers/documents/nes010871.pdf. April 1999. Accessed February 20, 2013.
  7. Marketing Charts, “Telephone Survey Response Rates Dropping; Accuracy Remains High” http://www.marketingcharts.com/wp/direct/telephone-survey-response-rates-dropping-accuracy-remains-high-22107/
  8. Among organisations conducting robo polls are Rasmussen Reports and SurveyUSA. Online polls include those by Harris Interactive and CONNECTA. For discussions of these types of polls, see http://www.pollster.com/blogs/ivr_polls/ and http://www.pollster.com/blogs/internet_polls/.

Credits

John L. Korey 2013, POLITICAL SCIENCE AS A SOCIAL SCIENCE, Introduction to Research Methods in Political Science:
The POWERMUTT* Project.