The Huffington Post Pollster Policy For Including Polls In Charts

From our founding in 2006, HuffPost Pollster (originally Pollster.com) has aimed to report the results of every public poll that claims to provide a representative sample of the population or electorate. Our criteria have always been expansive — we have included polls of varying methodology, including automated, recorded voice telephone polls and online surveys using non-probability internet samples.

In 2010, we toughened our criteria, requiring all polls from new organizations (or those new to us) to meet all of the minimal disclosure requirements of the National Council on Public Polls. We have always excluded polls that fail to disclose survey dates, sample size and sponsorship; going forward, and consistent with these policies, we may also choose, at our editorial discretion, to exclude polls or pollsters that do not provide sufficient methodological information for us or our readers to determine their quality.

The required elements are:

* Sponsorship of the survey

* Fieldwork provider (if applicable)

* Dates of interviewing

* Sampling method employed (for example, random-digit dialed telephone sample, list-based telephone sample, area probability sample, probability mail sample, other probability sample, opt-in internet panel, non-probability convenience sample, use of any oversampling)

* Population that was sampled (for example, general population; registered voters; likely voters; or any specific population group defined by gender, race, age, occupation or any other characteristic)

* Size of the sample that serves as the primary basis of the survey report

* Size and description of the subsample, if the survey report relies primarily on less than the total sample

* Margin of sampling error (if a probability sample)

* Survey mode (for example, telephone/interviewer, telephone/automated, mail, internet, fax, e-mail)

* Complete wording and ordering of questions mentioned in or upon which the release is based

* Percentage results of all questions reported

We are now seeing a greater proliferation of “do it yourself” polling — polls conducted with an open-source survey interface. These “do it yourself” polls may sacrifice the accuracy, rigor and credibility we find in other sources. Therefore, going forward, we reserve the right, at our sole editorial discretion, to exclude such polls when they are conducted without the active involvement of election pollsters or media organizations, or when they are not conducted or adjusted using methods that approximate the population or electorate. Additionally, telephone polls that make no effort to include cell-phone-only users — 48 percent of American adults, and an even higher proportion of young people and minorities — may be excluded at our discretion.

As part of our editorial scrutiny, and as we’ve always done, we won’t include poll questions that differ substantially from other polls’ questions on the same topic. Specifically, we only include closed-ended trial heat poll questions, and will not include open-ended questions or questions that provide information that will not be on the ballot. If a poll asks questions with different combinations of candidates, or leaves some candidates out of a question, we will only use the data from the question that lists all relevant candidates. We also do not include a release if its field period overlaps with the same pollster’s prior estimates; for example, if a poll reports a three-day rolling average each day, we will only include it in our charts when the pollster puts out a release in which all of the data are new. If a pollster releases estimates for more than one population on a vote preference question, we will use the sample that most closely approximates the likely electorate: if they release registered voters and likely voters, we will use likely voters in the chart; if they release adults and registered voters, we will include registered voters in the chart.

We believe our policies will benefit our users and help provide meaningful information.

Questions? Email us at pollster-feedback@huffingtonpost.com.

For some frequently asked questions, see below. We will add to the FAQ over time; in the meantime, if your question isn’t answered, feel free to email!

Questions

How can hundreds or thousands of interviews represent the opinions of an entire population, like all Americans?

There are a couple of different ways to answer this question, depending on how people are contacted to take the survey. The first assumes a random sample, which means every person in the population has a known chance of being selected. In this case, we can expect our sample to resemble the population within a certain amount of error (often called the "margin of error"). The margin of error is typically calculated at a 95 percent confidence level, meaning that if we drew many samples the same way, the results, plus or minus the margin of error, would capture what we would get from surveying the entire population about 95 percent of the time.
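
As a rough illustration, here is the textbook margin-of-error formula for a simple random sample (not any particular pollster's calculation), using the conservative assumption that opinion is split 50/50:

```python
import math

def margin_of_error(sample_size, p=0.5, confidence_z=1.96):
    """Approximate margin of sampling error for a simple random sample.

    Assumes the conservative p = 0.5 split and a 95 percent
    confidence level (z = 1.96) unless told otherwise.
    """
    return confidence_z * math.sqrt(p * (1 - p) / sample_size)

# A typical national poll of 1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # about 3.1 percentage points
```

Note how slowly the error shrinks: quadrupling the sample size only cuts the margin of error in half.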

If researchers can get a good sample and a good response rate — meaning a lot of the people contacted actually completed the survey — as we see on the Census and other very high-quality surveys, the sample will provide a good representation of the population. Ideally, for random sampling, researchers begin by identifying their population and getting a list of all members of that population from which to select the sample—the people they will attempt to survey. If the researcher has a complete list of the population, each person has the same probability of being selected into the sample, and we can say that the results we get from surveying that sample are “generalizable” to the entire population.
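
A minimal sketch of that ideal, assuming a hypothetical complete list of the population (the names and sizes below are made up for illustration):

```python
import random

# Hypothetical sampling frame: a complete list of the population
# (in practice this might be a voter file or an address list).
population = [f"person_{i}" for i in range(250_000)]

# Draw a simple random sample of 1,000 people to attempt to survey.
# With a complete frame, every person has the same chance of selection.
random.seed(42)  # fixed seed so the illustration is reproducible
sample = random.sample(population, k=1_000)

print(len(sample), sample[:3])
```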

In the absence of a good sample or a good response rate, researchers have to adjust the sample, or “weight” it, to force the sample to match the population on the desired characteristics (typically age, gender, race, income, and similar demographics). Weighting has become increasingly necessary in recent years, as the typical random sample telephone survey has had more and more trouble reaching a truly random cross-section of the public.
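
A minimal sketch of the basic idea, using one simple form of cell weighting and made-up numbers (real pollsters typically weight on several characteristics at once, often with iterative "raking"): groups underrepresented in the sample get weights above 1, and overrepresented groups get weights below 1.

```python
# Known population targets versus who actually responded (illustrative only).
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.18, "35-64": 0.52, "65+": 0.30}

# Each respondent in a group gets the ratio of target share to sample share.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Young respondents are weighted up (about 1.67); older respondents down (about 0.67).
```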

In the past, researchers could randomly dial phone numbers and fairly easily achieve a random, probability sample that could be generalized to the population since nearly every household had a telephone. However, as people give up land line telephones for cell phones, the picture has become more complicated. In order for a telephone survey to represent a population, it should include both land lines and cell phones. Response rates are also lower than ever before. The result is that researchers have to weight the sample to make the data generalizable to the population.

The second way to answer the question of representation in a sample addresses nonrandom samples, or “nonprobability” samples. This means we do not know the probability of everyone in the population being selected, so we can’t assume the sample is representative of the population.

The most common nonprobability sample uses the Internet to collect data. There is no way to randomly sample people via email, so these surveys rely on “panels” of people who volunteer to be surveyed. A small number of companies go to great lengths to adjust for the bias of not having a random sample, both when the sample is selected and by weighting it afterward to make sure it looks like the general population (whereas most phone surveys rely on weighting alone). Many internet surveys, however, do not use sampling procedures that help make the sample representative of the population, and these should be treated cautiously. If the survey’s methodology does not mention steps taken to make the sample look like the population, it probably is not generalizable to the entire population.

It is also difficult to measure error in nonprobability samples. Unlike with true random samples, we don’t know how much error to expect between the sample and the population with a simple margin of error calculation. Again, some web survey companies have dealt with the problem and some have not. If an Internet survey provides a margin of error without explaining how the margin was calculated to adjust for the nonprobability nature of the sample, it should not be trusted.

Why does question order matter?

When people answer survey questions, a specific question can get them thinking about a topic beyond that one question, especially if they haven’t given it much thought before. They might continue to think about the topic, and about how they answered, during the rest of the survey. If a later question is related to that initial question, the thoughts it spurred will influence how the respondent answers the later question.

For example, let’s say a poll begins by asking “Do you approve or disapprove of the job Senator Mary Landrieu is doing handling the Keystone XL pipeline issue?” and then follows that by asking “In the upcoming election, do you plan to vote for Mary Landrieu, the Democrat, Bill Cassidy, the Republican, or someone else?” A respondent’s answer about how Landrieu is handling the pipeline issue could directly influence their answer to the vote question, particularly if they aren’t certain how they will vote, even though the pipeline might not be something they would weigh in the voting booth. We would want to ask the vote question first, then approval. Although the vote answer could also affect the approval answer in that order, the vote question is typically the more important one in the poll. When questions will affect each other, ask the most important ones first and ask general questions before specific ones.

Why is a question worded that way?

Questions can easily be worded in ways that will affect responses. Good survey questions, and their answer options if any are provided, should present all sides of an issue without impacting how the respondent will answer. For example, a question that asks “Do you approve of the job President Barack Obama is doing in office?” is biased because it only asks if someone approves. When only one side is provided in the question, people are much more likely to answer the question with that side. The preferable wording would be “Do you approve or disapprove of the job President Barack Obama is doing in office?” which presents both possible options and provides respondents the cue that it is acceptable to disapprove.

Why are those the only responses listed?

There are two basic types of answers for survey questions: closed-ended and open-ended. Most questions are closed-ended, meaning that the people answering the questions are given a list of options from which to choose. They cannot just give whatever answer they want, although sometimes an “other” option is included for those who do not want to choose any of the existing options. Those writing the survey carefully choose which answer options to put with each question, ideally making sure that the options do not overlap but are general enough to capture the vast majority of opinion on an issue.

Sometimes questions are open-ended, meaning the respondent can answer the question however they want. Over the telephone, an interviewer will record what the respondent says; for online or any other self-administered survey, the respondent will write or type out their answer. The disadvantage of this type of question is that the answers are difficult to analyze: in a survey of 1,000 people you could have up to 1,000 different responses, which are much harder to summarize and work with. For this reason, most survey questions are closed-ended.

Asking the same question in closed-ended and open-ended formats can produce different results. For example, when respondents are given a slate of candidates from which to choose their preference, they will most often choose one from the list. But if respondents have to come up with a name on their own because the question is open-ended, the answers will differ, and often more people will say they are undecided without a list to prompt them.

I’m on the federal “do not call” list. Why do I still get phone calls from pollsters and survey researchers?

Survey research and polls are not restricted by the federal Do Not Call Registry. The registry was created to prevent telemarketers from making unwanted calls to those phone numbers on the list. Survey research and polling were explicitly exempted from this policy since they are not selling anything, but are only asking for opinions. Most survey companies will respect any request to be removed from the contact list, but remember that you could still receive survey calls from other companies.

I’ve never known anyone who has the opinion reported in that poll. How can the poll be accurate when everyone I know thinks something else?

People tend to live and work with people like them. They might think there’s a lot of diversity within their group of friends and coworkers, but odds are it’s not as diverse as they think. A survey is designed to include every possible demographic and type of individual that exists in the population, even those with very different views from most. In this way, the survey is probably more representative of the population, and more diverse, than any one person’s network. Remember that there are well over 300 million people in the United States, so chances are a good number of them hold very different views from the people around you. Even if the views you see as dominant are held by only a small share of the country, that small share still adds up to a lot of people.