They say the world is changing. Let’s check that out empirically.

We might run a couple of sample surveys to see how people’s behaviors or attitudes change between the two questionnaire mailings. A colleague, however, suggests panel sampling. Which should we choose? If we go with a panel, *what kind* of panel will cost-effectively measure the trend?

Yes, these are leading questions, excuses for me to post a column about what my colleagues generously call “Phillips’ Law” of longitudinal sampling. People hear that and ask, “What the h*** is Phillips’ Law?” Now instead of sending them to the academic publications [1], I’ll be able to answer, “Look at the blog.”

Suppose we want to know the month-to-month trend in a company’s purchases of software licenses, or a household’s purchases of breakfast cereal. The January-February *trend* in purchase volume is simply the difference between the amount purchased in February and the amount purchased in January. Trend measurement necessarily involves a span of time – a before-and-after – and is an example of *longitudinal* measurement. We can contrast this kind of measurement with *cross-sectional* measurement, which involves only one point in time. (Examples of cross-sectional marketing queries include, “What brand did you buy today?”; “Whom did you vote for in the last election?”; and so on.)

The word *panel* refers to a research design in which respondents answer the same questions at least twice, with the reporting occasions separated by an interval of time. An *ideal panel* is one without attrition, i.e., one in which all respondents in the first wave of reporting also report in subsequent waves. The population of respondents in a panel with attrition can be *static* or *dynamic*. In a static panel there is no replacement of panel members who have dropped out after the first wave, whereas a dynamic panel is replenished in order to preserve the panel as a good representation of a particular population. The total replenished panel sample, including both original respondents and replacement respondents, is called the *summary sample*. As an alternative to panels, a trend might be measured by two or more independent, cross-sectional *surveys*, each with a different sample of respondents.

Operators of continuous, multi-client panels replenish their panels as a matter of course; they may then charge extra for the programming needed to “pull” a static sample. It is widely presumed that static samples are superior to summary samples for minimizing sampling error. I show below that this is not always true, and that research users (who often pay a premium to ensure a sizeable static sample) can save money by using appropriate summary samples.

To briefly review Stats 101, both surveys and panels are samples, subject to sampling errors of two kinds. Sample *bias* arises when we sample a group of respondents that is not really representative of the target population of buyers. Sample *variation* happens when we sample representative respondents but not enough of them. The word *bias* means the same thing in statistics as in ordinary language; a biased sample statistic is “a little bit off” its true value. Momentarily ignoring sample variation, we can write the measured value of the trend like this:

Measured January-February trend in purchase volume =

*(True volume in February + bias in February measurement) –*

*(True volume in January + bias in January measurement)*

Panels have a clear advantage over repeated independent surveys, insofar as bias is concerned. In a panel that is ideal or nearly so, January bias pretty much equals February bias, and the bias terms in the above equation cancel out, leaving only the true trend. When two surveys use different sampling methodologies and/or measure completely different sets of respondents, the two bias terms differ, and the survey-measured trend is in error by an amount equal to “February bias minus January bias.”
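The cancellation is easy to see with made-up numbers (a toy sketch; every value here is hypothetical):

```python
# Hypothetical true volumes and biases, chosen only for illustration.
true_jan, true_feb = 100.0, 110.0
true_trend = true_feb - true_jan                 # = 10.0

# Panel: the same respondents report twice, so the same bias repeats.
panel_bias = 4.0
panel_trend = (true_feb + panel_bias) - (true_jan + panel_bias)

# Independent surveys: each wave carries its own, different bias.
bias_jan, bias_feb = 4.0, -2.0
survey_trend = (true_feb + bias_feb) - (true_jan + bias_jan)

print(panel_trend)    # 10.0 -- the bias terms cancel
print(survey_trend)   # 4.0  -- off by bias_feb - bias_jan = -6.0
```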

Should we measure a trend using repeated independent samples, or a panel sample? Because bias is hard to estimate (most commercial researchers provide a speculative list of possible sources of bias, without estimating their magnitude, and call it a day), we will make decision rules based on the variation of the estimate, or more precisely, the mean square error. A key element of our decision is the *serial correlation* (sometimes called autocorrelation) of respondents’ behavior. The serial correlation reflects the tendency of individual buyers/respondents to repeat the same behavior in successive measurement periods.

If a market measure is reported in two independent surveys of size n taken in two periods, and this market measure has the same variance σ² in both periods, then the mean square error of an observed trend in this measure is MSE (surveys) = 2σ²/n. When observations are taken on an ideal panel (i.e., no attrition) for two sufficiently short periods, and the serial correlation of individual behavior is ρ, then the mean square error of the measured trend is

MSE (ideal panel) = 2σ²(1 – ρ)/n

This formula implies a well-known proposition:

*For any positive ρ, the MSE is less for an ideal panel than for repeated surveys.*
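A small Monte Carlo sketch (using NumPy; the values σ = 1, n = 500, ρ = 0.6 are my illustrative choices) confirms the two expressions, 2σ²/n for repeated surveys and 2σ²(1 – ρ)/n for an ideal panel:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, rho = 500, 1.0, 0.6    # illustrative values, not from the article
reps = 20_000

# Two independent surveys: a fresh sample of n respondents each wave.
jan = rng.normal(0.0, sigma, (reps, n)).mean(axis=1)
feb = rng.normal(0.0, sigma, (reps, n)).mean(axis=1)
mse_surveys = np.mean((feb - jan) ** 2)          # true trend is 0 here

# Ideal panel: the same n respondents, behavior serially correlated at rho.
wave1 = rng.normal(0.0, sigma, (reps, n))
wave2 = rho * wave1 + np.sqrt(1 - rho**2) * rng.normal(0.0, sigma, (reps, n))
mse_panel = np.mean((wave2.mean(axis=1) - wave1.mean(axis=1)) ** 2)

print(mse_surveys * n)   # hovers near 2*sigma^2 = 2.0
print(mse_panel * n)     # hovers near 2*sigma^2*(1 - rho) = 0.8
```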

For a panel with attrition, only the static portion (the Rn respondents who report in both waves) contributes to a static estimate, and

MSE (static portion) = 2σ²(1 – ρ)/(Rn)

where R = proportion of non-dropouts (the "retention rate"), and n = size of the summary sample. R, of course, is the complement of the attrition rate A. (R = 1 – A.)

This implies a second proposition, which may be called a "law" because of the simple and unexpected nature of the result:

The static panel estimate is less variable than the survey estimate whenever ρ > A, i.e., when the autocorrelation exceeds the attrition rate.

There are a number of simple proofs of this.
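The law is also easy to check numerically. The sketch below (variable names and values are mine; σ = 1) plugs the two MSE expressions, 2σ²/n for repeated surveys and 2σ²(1 – ρ)/(Rn) for the static portion, into a direct comparison:

```python
def mse_surveys(n, sigma2=1.0):
    # Two independent cross-sectional surveys of size n each.
    return 2 * sigma2 / n

def mse_static(n, rho, R, sigma2=1.0):
    # Static portion of the panel: only R*n respondents report twice.
    return 2 * sigma2 * (1 - rho) / (R * n)

n, R = 1000, 0.8
A = 1 - R                            # attrition rate
for rho in (0.1, 0.3, 0.6, 0.9):
    static_wins = mse_static(n, rho, R) < mse_surveys(n)
    print(rho, static_wins, rho > A)   # flags agree (away from rho = A)
```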

Because such simple relationships are rare in marketing (at least, useful ones are), and because this one seemingly had escaped researchers' notice, my colleagues kindly dubbed it “Phillips' Law of Longitudinal Sampling.” I haven't resisted.

It can be shown further that MSE (summary sample) = 2σ²(1 – Rρ)/n. Simple arithmetic then shows that

*The estimates from the summary sample are less variable than those obtained from the static sample whenever ρ < 1/(1 + R).*

In market research, this inequality holds for broad classes of consumer goods purchase behavior, leading to a conclusion that violates conventional wisdom: Summary samples are often preferable to static samples.
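Again, illustrative numbers make the inequality concrete (with R = 0.8, the threshold 1/(1 + R) ≈ 0.556; the MSE expressions are the closed forms discussed above, and all parameter values are my own):

```python
def mse_static(n, rho, R, sigma2=1.0):
    # Static portion: only the R*n retained respondents, correlated at rho.
    return 2 * sigma2 * (1 - rho) / (R * n)

def mse_summary(n, rho, R, sigma2=1.0):
    # Summary sample of size n; only the retained R*n pairs are correlated.
    return 2 * sigma2 * (1 - R * rho) / n

n, R = 1000, 0.8
threshold = 1 / (1 + R)                  # about 0.556
for rho in (0.3, 0.5, 0.7):
    summary_wins = mse_summary(n, rho, R) < mse_static(n, rho, R)
    print(rho, summary_wins, rho < threshold)   # flags agree
```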

How can a researcher put these tools to use? The key is the autocorrelation statistic, and sadly, its exact value is never known in advance. However, researchers can estimate the autocorrelation of behavior in the population of interest; rough estimates can be obtained from other studies of the same or similar behaviors or groups. Plug the estimated autocorrelation into the inequalities presented in this article, and you will be able to decide among independent surveys, static panels, and summary panels – and get the most accurate trend estimates for your budget.
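That decision procedure can be sketched as a small helper (the function name and parameter values are mine, and data-collection costs are ignored):

```python
def choose_design(rho, R, n, sigma2=1.0):
    """Return the design with the smallest trend MSE, costs ignored,
    using the closed-form expressions discussed in this article."""
    candidates = {
        "independent surveys": 2 * sigma2 / n,
        "static panel":        2 * sigma2 * (1 - rho) / (R * n),
        "summary panel":       2 * sigma2 * (1 - R * rho) / n,
    }
    return min(candidates, key=candidates.get)

print(choose_design(rho=0.7, R=0.8, n=1000))   # static panel
print(choose_design(rho=0.4, R=0.8, n=1000))   # summary panel
```

Note that with these cost-free MSEs a summary panel never does worse than independent surveys when ρ > 0 (since 1 – Rρ < 1), so independent surveys win mainly on cost or bias-management grounds.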

Postscript: OK, this isn’t e = mc². Not of earth-shaking importance. But it’s very simple and pretty cool, useful to a small group of survey researchers, and I’m proud of it.

[1] Y. Wind and D.B. Learner, “On the Measurement of Purchase Data: Surveys vs. Purchase Diaries.” *Journal of Marketing Research,* 16 (February 1979), 39-47;

B. Golany, F. Phillips, and J.J. Rousseau, “Optimal Design of Syndicated Panels.” *European Journal of Operational Research,* 87 (1995), 148-165;

B. Golany, F. Phillips, and J.J. Rousseau, “Few-Wave vs. Continuous Consumer Panels: Some Issues of Attrition, Variance and Bias.” *International Journal of Research in Marketing,* 8 (1991), 273-280.
