If I asked you to call 1,000 people today and screen each person for one minute, in the hopes that 500 of them would agree to participate in an upcoming survey, you could not do it. At 60 seconds per call, 1,000 calls would take more than 16 hours without a break.
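The arithmetic behind that claim is easy to check; here is a trivial sketch in Python:

```python
# 1,000 screening calls at one minute each -- how long is that?
calls = 1_000
minutes_per_call = 1
total_hours = calls * minutes_per_call / 60
print(f"{total_hours:.1f} hours of nonstop dialing")  # 16.7 hours
```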

Strictly speaking, doing a poll is not easy. It requires many steps, each crucial to the outcome. One major issue is locating sample subjects. How do you find these people? Today, a truly random, representative sample for a poll would require you to knock on doors, send snail mail, or send emails, text messages, or tweets asking people if they want to take a survey. You could also start making cold calls to people’s landlines and smartphones. Most pollsters rely on cold calls, though some recruit respondents online as well.

Let’s consider the use of the telephone. I don’t know about you, but I don’t answer my phone if I do not know who is calling. It is going to be tough to find good respondents using the telephone. After all, if the person on the other end of the line does not seem to be in their right mind, then ethically you are not supposed to include them – no people in advanced stages of dementia. And the longer the 2016 campaign goes on, as the weeks turn into months, the harder it becomes to find people you have not already polled: people who will answer a call from an anonymous number, stay on the line, and do it all for free.

You will also have to approach many more than 500 people every time you want to run a poll, because so many of the people you call will not want to participate. Once you have 500 people you believe you have found randomly, and who represent the universe of voters at large, that is where the rubber meets the road. Now you have to explain what you are doing and obtain their permission. The onus is on the pollster to ensure that the person on the other end of the phone is not an Alzheimer’s patient, or a fifteen-year-old imitating one of their parents.
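To get a feel for the scale, here is a rough sketch. The 10% response rate is my assumption for illustration, not a figure from any particular pollster:

```python
# Hypothetical: how many dials does it take to land 500 completed
# interviews if only ~10% of the people you reach agree to participate?
target_completes = 500
assumed_response_rate = 0.10   # assumption for illustration only

calls_needed = target_completes / assumed_response_rate
print(int(calls_needed))       # 5000 dials -- for a single poll
```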



Once you have done all of the above, you can finally ask who they will vote for this year.

Then, you either have to start looking for the next pool of sample subjects for the next poll, or face the decision to continue using the same subjects again and again. The problem with using a panel of repeat subjects is that while it may bypass the challenge of finding a new random, representative sample for every poll, one also has to contemplate the very real demon of saturation. A pollster cannot keep calling the same people every week throughout the entire campaign. People get tired, they get bored, they lose interest – especially if they are doing this for free, from the goodness of their own hearts, for the sake of democracy. For example, people on the Los Angeles Times panel may be giving great information for that poll. On the other hand, some of them may be getting tired of giving their answers for free every week, all year long, while the Los Angeles Times makes money from their information. The real problem is that, for any given panelist, it may be one or the other or somewhere in between. In a climate where polls are run every week for months on end, it becomes difficult to gauge all of these details with accuracy.

Bear in mind, this is not the formula for producing a national survey for a major piece of social science research, which could take up to a year to prepare and disseminate. This is the shortest, quickest methodology for attaining the simplest national poll possible. What I wrote above is not even a thick description of the survey and polling process using a national sample – it is more like the introduction to a research methods class.

Okay, that’s enough – now let’s take a look at what the New York Times writes in "How Different Polls Work."

First, we are told this:

"Pollsters have several methods to choose from when conducting a poll. Regardless of method, it's hard to get a representative sample of the population to answer survey questions, so most polls weight their response data to match the expected composition of the electorate."

The old weighted-response line is kind of earthshaking if you ask me. I understand we have to be practical, but giving the pollster the luxury of weighting the poll as they see fit is circular reasoning that seems downright medieval. In other words, we take a poll so we can know how people feel about the election. But since we don’t know if we have a random, representative population sample, we weight our results – meaning we systematically bias and skew them – so that the poll comes out the way we think it would have if we had executed it properly. Ouch! That doesn’t sound like science; that sounds like quantum physics!
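For anyone who has not seen it done, here is roughly what that weighting looks like in practice. This is a minimal sketch with made-up numbers, not any actual pollster’s procedure:

```python
# A minimal sketch of poll weighting with made-up numbers. Suppose our
# 500 respondents split 350 women / 150 men, but we expect the electorate
# to be 52% women / 48% men. Each respondent gets a weight of
# (expected share) / (observed share), so under-represented groups count more.

respondents = {"women": 350, "men": 150}   # hypothetical raw sample
expected = {"women": 0.52, "men": 0.48}    # hypothetical electorate targets

total = sum(respondents.values())
weights = {g: expected[g] / (respondents[g] / total) for g in respondents}

for group, w in weights.items():
    print(f"{group}: weight = {w:.3f}")
# women: weight = 0.743
# men:   weight = 1.600
```

Notice that the "expected" shares are targets the pollster supplies; that is exactly the wiggle room in question.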

Next, we read:

"LiveTelephone Polls An interviewer asks questions of a respondent by telephone.Most telephone polls conducted by live interviewers include both landlines and cell phones. Currently, the CDC estimates that about half of U.S. households do no thave a landline."

Would you want to include landlines as part of a representative sample? Why or why not? What about smartphones? And then there is the perpetual question: do you answer your phone and take surveys for perfect strangers? If you were a pollster, wouldn’t you almost want to exclude anyone who answered and participated, just because they must be so unusual in the first place?

Next we read:

"Online Polls Most online polls are based on panels of self-selected respondents. Internet access is not yet evenly distributed across socioeconomic and demographic groups."

That internet access line is a myth. Everyone is online. There is no culture on the streets anymore. Everyone has gone underground, and everyone is online. The utility of online versus offline polling samples is going to be questioned more and more. Besides, they just told us landline usage is not evenly distributed either. How do you know any better who responds to your poll online than on a telephone? If a great problem with poll-taking is audience saturation, then online polls at least benefit from audience enthusiasm. So a poll done the old-fashioned way is a poll of relatively unenthusiastic people who answer their phones when perfect strangers call, and who stay on the line long enough to complete a survey. No one who is at all busy will take the time to participate in a telephone survey – so just who are these people responding to a telephone survey call on their landline or smartphone with caller ID?

Finally, we read:

"Interactive Voice Response Polls: Interactive voice response (I.V.R.) polls (also known as “robo-polls” or “automated polls”) employ an automated, recorded voice to call respondents, who are asked to answer questions by punching telephone keys. Anyone who can answer the phone and hit the buttons can be counted in the survey. Most I.V.R. polls call only landlines."

I am not even going to respond to this tactic. Any pollster who believes they are getting a random, representative sample of anything by doing robo-calls to landlines is probably paying someone a lot of money to engineer the robo-call results in the first place. It’s 2016. Who in their right mind would be enthusiastic about making or receiving robo-calls on a landline? And if people did want to respond to a robo-call, how do you know who is actually responding, and how do you know robo-responders are representative of most people?

* * * * *

That ought to give you plenty of food for thought. The next time you hear results from election polls, you might see them a little differently. Once you learn how research works, you realize you cannot evaluate a study without knowing all kinds of details about the research design. The world today moves so fast that those details get glossed over easily.

One last thing – about margins of error. Often when you hear about a poll on the news, they mention the margin of error, but they rarely explain it correctly. The margin of error is the range, in percentage points, by which each reported number could plausibly miss the true value in either direction, usually stated at a 95% confidence level. By now, the fact that polls can be off comes as no surprise to us. Typically there is a 3-point margin of error, and early in the campaign they poll registered voters. After the conventions, they begin polling likely voters, which is a little different, and margins of error of 4-5 points are common.
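The margin is driven mostly by sample size. Here is the standard textbook formula for a simple random sample; note how the 500-person sample from earlier lands right in that 4-5 point range:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")   # ~3.1% -- the familiar 3-point margin
print(f"{margin_of_error(500):.1%}")    # ~4.4% -- closer to the 4-5 point range
```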

For example, if Clinton has 46% and Trump has 40%, and the margin is 3 points, the TV newscasters say the race is within the margin of error. In this case, Clinton could really have 43% and Trump could really have 43%, so with a 3-point margin of error, any result within 6 points is a virtual tie. Most of the results have been in this range for most of the election. However, when the polls show more than a 6-point spread, the newscasters stop mentioning the margin of error, and that is a mistake: the margin applies to every result. Say Clinton has 50% and Trump has 40%. With a 3-point margin of error, Clinton could have 47% and Trump could have 43%, so a 10-point spread might really be a 4-point spread.
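The same back-of-the-envelope check works for any result, using the simplified model above where each candidate’s number can shift by the full margin in either direction:

```python
# Smallest and largest possible lead of candidate a over candidate b,
# using the simplified "each number can shift by the full margin" model.
def lead_range(a, b, moe):
    return (a - moe) - (b + moe), (a + moe) - (b - moe)

print(lead_range(46, 40, 3))   # (0, 12) -- a virtual tie is possible
print(lead_range(50, 40, 3))   # (4, 16) -- a 10-point lead might be only 4
```

A statistician would model the two numbers jointly rather than independently, but the simple version is enough to show the wiggle room.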

In other words, there is always a margin of error, and there is always some wiggle room. These polls are far from perfect, and when I see a 5-point margin of error, I find the result practically worthless. Why even bother?

Now you know what I know, and you know what the New York Times knows. Now we all know how difficult it is to do presidential election polls. Have fun making up your mind about election polls in this campaign year. My expert social psychologist’s point of view suggests that if the next poll favors our favorite candidate, we will take it as a sign; and if the next poll favors the other candidate, we will smile and say, “Nobody believes any of those polls anyway!”