
WHEN SOMETHING ISN'T BETTER THAN NOTHING
"IT'S NOT SCIENTIFIC, BUT . . ."
J. Ann Selzer, Ph.D.

How many times have you heard newspaper researchers share their frustration with call-in, write-in, or inserted surveys their newspapers insist on conducting? Even though it's not scientific, the justification goes, isn't it better than nothing?

I've found a new way of answering this question. My old way wasn't all that bad, but it was aimed mostly at advertising and marketing directors. They have a penchant for inserting surveys in the paper, often the single-copy editions, in order to get some idea of who buys the newspaper that way. They say their advertisers deserve to know something about that audience, and isn't something better than nothing?

In essence, I replied at a conference of ad directors a few years ago, they were asking if it was okay to gift-wrap a cockroach and give it to their advertisers, because at least it was a gift. And wasn't that better than nothing?

Thanks to a humbled researcher who shared her story, I've been able to illustrate just how expensive this kind of "free" research can be. Her newspaper conducted a single-copy "survey," inserting thousands of questionnaires into rack copies and getting a few hundred completed returns, a response rate in the low single digits. The finding: the majority of single-copy buyers are older women.

What happened next was predictable, yet still horrifying. They mounted a marketing campaign with the objective of increasing the buying frequency among older women. A year later, they conducted some scientific research and were mortified to learn that most single-copy buyers are not older women; in fact, most are male. They'd banked on unscientific research and spent their marketing budget targeting the wrong readers. Money down the drain.

The reality is that the majority of people who fill out surveys inserted in single copies of newspapers and send them in are older women. Their research, because it did not rely on scientific sampling, could not be generalized to the population they wished to study: single-copy buyers. They figured that because they had hundreds of responses, as many as they might have gotten with a small scientific phone or mail survey, the margin of error for their insert study would be acceptable.

But there is no way to calculate a margin of error on an unscientific sample. The principle does not apply. There are no confidence levels to decide upon, and no way to estimate how far the findings might vary from the attitudes of the target population. It isn't science, so no scientific principles apply.
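For readers who want to see why, the textbook margin-of-error formula (a standard sketch, not anything computed for the survey above) assumes a simple random sample of size n, in which every member of the population had a known, nonzero chance of selection:

\[
\mathrm{MOE} = z\,\sqrt{\frac{p\,(1-p)}{n}}, \qquad z = 1.96 \text{ at the 95 percent confidence level}
\]

A self-selected insert sample violates the random-selection premise behind that derivation, so the formula yields no meaningful number, no matter how large n grows.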

As I said, this tactic works reasonably well with marketing and advertising directors. But editors are a different kettle of fish. They will sometimes look for a way to compile data and present findings that are not culled from a representative sample. They cover themselves by dropping the word "unscientific" into the story from time to time, so that the reader won't think the newspaper doesn't know the difference between scientific and unscientific research.

A client newspaper of mine (one I follow closely) reported on just such an "unscientific survey" of former residents of its state to find out why the population is declining. It was a major piece, with several charts and graphs, but it offered no description of methodology to help the reader judge whether to believe the findings. Worse, the story was reported almost identically to the way the paper would handle a scientific public opinion survey. In short, it looked like a duck and quacked like a duck. Wouldn't most readers mistake it for a duck, I wondered?

Even though the newspaper disclaimed the research as "unscientific," publishing such data is unacceptable, I say. Imagine a reporter who produced an investigative story that relied on an unreliable source, an unsubstantiated claim, or a statement that might well be false. Imagine, too, that the reporter believed this to be okay because he or she had noted in the text of the story that these elements were questionable. What would most editors do? Publish the story? I doubt it, because that story may not be true and they know it.

So, too, should they hesitate to publish the results of a "survey" that does not comply with the science of sampling. Even when you have responses from 500 or 1,000 people, if they were not drawn from a representative sample, the findings cannot be projected to any broader population. Again, no margin of error can be calculated, and there is no method to determine how accurate the findings might be at any level of confidence. In short, the findings may not be a true representation of the population to which the respondents belong.
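For comparison, here is what those sample sizes would buy if the respondents had been drawn at random. This is illustrative arithmetic of my own, assuming the worst-case proportion p = 0.5 at 95 percent confidence; it is not a figure from any survey described above:

\[
n = 500:\quad 1.96\sqrt{\frac{0.5 \times 0.5}{500}} \approx \pm 4.4\%, \qquad n = 1{,}000:\quad 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx \pm 3.1\%
\]

A self-selected sample earns neither figure; no volume of responses substitutes for random selection.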

Unfortunately, newspapers publish this sort of "information" all the time. Somehow, they think if they label the findings "unscientific," then it's okay to publish.

Our job as researchers is to help them see that publishing unreliable information does not serve their mission. Publishing nothing would be better than publishing misleading information. This is my new tack for enlightening editors to the unintended ill their unscientific surveys can do.

Published in NAA Research Federation Newsletter, January 2001
