So let's go over the poll, one more time.
Over 157,000 households were contacted; there were 13,000 respondents. They were weighted for geographic density -- in other words, they were drawn from all over the country.
The automated dialer phoned these households and asked whether abortionist Henry Morgentaler should receive the Order of Canada. Respondents pressed "1" for "yes" and "2" for "no".
Over 55% responded "no". In some provinces, opposition ran to about two-thirds of respondents.
Pretty straightforward, huh?
Does it matter who commissioned or conducted the poll? No. Hand the same poll to any large polling firm, apply the same methodology, and 19 times out of 20 the results will fall within the 1.5% margin of error.
Let's consider that for a moment - 157,000 households contacted, and 13,000 responded. Hmmm...can you say "self-selecting sample"? It might also explain the rash of telephone spam I received a few weeks ago, when the same semi-concealed number kept calling me - at dinnertime. Getting back to the numbers, that's roughly an 8% response rate. That's pretty low, and chances are the 8% who even bothered to respond are mostly the same crowd that has been screaming ever since Morgentaler's OC was made public knowledge. The rest of the population doesn't give a damn.
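The response-rate arithmetic is easy to check. A quick sketch, using only the two figures reported for the poll:

```python
# Reported figures from the poll: households dialed and respondents.
households_dialed = 157_000
respondents = 13_000

# Response rate: respondents as a fraction of households contacted.
response_rate = respondents / households_dialed
print(f"Response rate: {response_rate:.1%}")  # roughly 8.3%
```

Call it 8% - the point is that more than nine out of ten households contacted never answered at all.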
The piece that makes me suspicious is the confidence interval - 1.5%, nineteen times out of twenty. I studied just enough of this kind of statistical analysis in university to be a little suspicious. 1.5% is pretty tight for this kind of statistic, and "19 times out of 20" is basically 95% confidence. The usual rule is that, for a given sample, as the margin of error is tightened, the confidence you can claim in that result drops off.
From a purely mathematical standpoint, I can draw nearly any confidence interval I want from a large enough sample size. That's why you also have to look at the sampling technique, as well as the questions asked in the survey.
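The standard large-sample arithmetic bears this out. A minimal sketch, using the usual normal-approximation margin of error for a proportion - which assumes a simple random sample, precisely what a self-selecting phone-in sample is not:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion p with sample size n,
    using the normal approximation (z = 1.96 for "19 times out of 20")."""
    return z * math.sqrt(p * (1 - p) / n)

# With 13,000 responses the formula gives a margin well under 1.5% --
# but only if the sample were random, which a self-selected one isn't.
for n in (1_000, 13_000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.2%}")
```

The tight margin falls straight out of the large sample size; it says nothing at all about whether the sample actually represents the population.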
Telephone solicitation is rapidly becoming a very weak strategy for pollsters. Not only do more and more people simply refuse to answer phone calls from numbers they don't recognize (I'm one of them), but the days of public telephone directories are rapidly dwindling. More and more people have unlisted numbers, and frustration with unwanted telephone spam is killing off people's polite willingness to even tolerate survey questions.
This means that KLR-VU's analysis needed to correct for sample bias in its raw data.
A classic example of a biased sample and the misleading results it produced occurred in 1936. In the early days of opinion polling, the American magazine Literary Digest collected over two million postal surveys and predicted that the Republican candidate in the U.S. presidential election, Alf Landon, would beat the incumbent president, Franklin Roosevelt, by a large margin. The result was the exact opposite.
This alone, along with a disturbingly simplistic question in the first place, puts this poll in the danger zone of being of limited or no real value. A binary yes/no question suggests there are only two answers, and as we all know, in politics there are always more than two opinions. (By the way, this is something that has bothered me a great deal about Nanos Research polling in recent months as well.)
Right now, like other polls that have been traced back to the HarperCon$, this one starts to feel like a "push poll" - something we already know the Con$ have been engaging in.