As you may know, the IAB recently released a 40+ page report evaluating long-form (multiple-question) research methodologies for assessing the effectiveness of advertising on the Internet. The conclusion: there are multiple issues with this approach that call into question the value and validity of the results of long-form research. In other words, garbage in, garbage out.
Vizu was founded some years ago in part to address the very issues highlighted in this paper. We strongly stress the importance of focusing on measuring and optimizing the primary marketing objective of any campaign by asking a single question around a brand funnel metric. Unlike long-form research, this approach not only allows us to collect sufficient data in real time to optimize campaigns but, just as importantly, helps to address many of the issues highlighted in the paper. Because many of our competitors attempt to position our approach as a shortcoming, we want to take this opportunity to point out a few key findings from the paper and contrast Vizu's approach with that of other vendors. A link to the full paper is included below, but as it can be a bit of a bear to read and covers a variety of topics, the following is a synopsis of the key findings. They are quite eye-opening. Please don't hesitate to contact us if you have any questions or would like further clarification.
The Vizu Team
Key Findings / Issues: IAB Whitepaper: An Evaluation of Methods Used to Assess the Effectiveness of Advertising on the Internet. (From Page 4)
According to this report, the validity of long-form Interactive Advertising Effectiveness (IAE) research is "threatened...by the extremely low response rates achieved in most IAE studies."
The paper states that typical response rates are often far less than 1%. If the non-responders differ materially from the responders in their propensity to be positively influenced by internet advertising (which the paper states could well be the case), then IAE studies are at best significantly overestimating the positive impact of the advertising, and at worst drawing completely invalid conclusions.
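The report's concern can be made concrete with some back-of-the-envelope arithmetic. The numbers below are hypothetical (they are not from the report or from Vizu data), chosen only to show how a low response rate lets responders dominate the measured result:

```python
# Hypothetical illustration of non-response bias: if the people who answer
# surveys respond to advertising differently than the people who don't,
# a study that sees only responders misstates the population-wide effect.

def population_lift(responder_lift, nonresponder_lift, response_rate):
    """Average lift across the full exposed population, weighting
    responders and non-responders by their share of that population."""
    return (response_rate * responder_lift
            + (1 - response_rate) * nonresponder_lift)

# Suppose responders show a 5-point lift but non-responders only 1 point,
# at a 1% response rate. A study measuring only responders would report
# 5 points, while the population-wide lift is about 1 point.
lift = population_lift(0.05, 0.01, 0.01)
print(f"{lift:.4f}")  # 0.0104
```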
Asking a single question focused on the primary marketing objective of a campaign generates response rates 50-100 times higher than those of traditional long-form custom surveys or short questionnaires that require multiple responses. This allows us to avoid the well-known issues associated with "professional survey taker syndrome." For each additional question asked, response rates drop by roughly 65%, which quickly leads to issues with non-response bias.
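The effect of that per-question drop compounds quickly. As a rough sketch (treating the ~65% figure as a uniform per-question decay, which is a simplifying assumption):

```python
# Illustrative arithmetic: if each question beyond the first cuts the
# response rate by ~65%, the effective response rate of a survey decays
# geometrically with its length.

def effective_response_rate(base_rate, num_questions, drop_per_question=0.65):
    """Response rate after applying a per-question drop to every
    question beyond the first."""
    return base_rate * (1 - drop_per_question) ** (num_questions - 1)

base = 0.01  # assume a 1% response rate on a single-question survey
for n in (1, 2, 5, 10):
    print(f"{n} question(s): {effective_response_rate(base, n):.6%}")
```

A ten-question survey under these assumptions retains less than a ten-thousandth of the single-question response rate, which is how long-form studies end up well below 1%.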
According to this report, the validity of long-form IAE research is also called into question by "the near exclusive use of quasi-experimental research designs rather than classic experimental designs." Translated into plain English: the control and exposed respondent groups are not generated randomly or gathered in exactly the same manner, which makes it impossible to say with certainty that the groups are equivalent, and therefore impossible to attribute the measured impact to the advertising alone.
Many IAE vendors rely on "nodes" installed on certain pages of publisher websites to recruit respondents. The respondents collected from the nodes, however, reflect only people visiting those particular pages, as opposed to the entire footprint of the campaign, leading to the issues identified in the report. A few vendors use the DART ad server to perform this segmentation, which of course requires that an advertiser be using DART. Even when this is the case, however, it does not address the issue. First, it does not account for publisher-served ad impressions. Second, DART's Audience Segmentation functionality was not designed for this purpose and is not capable of creating truly random assignment between the two groups, leading again to a quasi-experimental design.
Vizu's standard implementation of our Ad Catalyst solution employs a true classic experimental design. The control and exposed groups are collected in exactly the same manner from exactly the same audience, with individuals randomly assigned to each group. In so doing, we can ensure the validity of the comparison between the groups when calculating Brand Lift.
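A classic experimental design of this kind can be sketched in a few lines. This is a minimal illustration of the general technique, not Vizu's implementation; the counts at the end are hypothetical:

```python
import random

# Sketch of a classic experimental design: each individual reached by the
# campaign is randomly assigned to the control or exposed group at the
# moment they are encountered, so both groups are drawn from the same
# audience in exactly the same way.

def assign_group(rng=random):
    """Randomly assign an incoming individual to 'control' or 'exposed'."""
    return "exposed" if rng.random() < 0.5 else "control"

def brand_lift(exposed_favorable, exposed_total,
               control_favorable, control_total):
    """Relative lift of the exposed group's favorable-response rate
    over the control group's."""
    exposed_rate = exposed_favorable / exposed_total
    control_rate = control_favorable / control_total
    return (exposed_rate - control_rate) / control_rate

# Hypothetical survey counts for illustration only:
print(f"{brand_lift(330, 1000, 300, 1000):.1%}")  # 10.0%
```

Because assignment is random and both groups answer the same single question, any difference between the groups beyond sampling noise can be attributed to the advertising.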
The report notes that the majority of IAE vendors, in an effort to address the first two issues, employ weighting adjustments, but then states: "There are no weighting adjustments that currently are made in IAE test/comparison group studies that provide any certain improvement in the validity of these studies."
The variables typically used in weighting IAE studies are gender, age, income, and internet usage. The paper states, however, that reliable data on these parameters for the specific target population of any given campaign is generally not available. Further, psychographic variables linked to the focus of each specific study are often far more important than the demographic variables, and that data is even harder to come by.
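To see why unreliable population data undermines weighting, here is a minimal sketch of the kind of demographic reweighting the report questions. The population shares below are assumptions invented for illustration; the report's point is precisely that trustworthy figures for a specific campaign's target population are generally unavailable, so the resulting weights inherit whatever error those figures contain:

```python
# Post-stratification sketch: respondents are reweighted so the sample's
# composition matches assumed population proportions. If the assumed
# proportions are wrong, the weights are wrong.

def poststratification_weights(sample_counts, population_shares):
    """Weight for each cell = population share / sample share."""
    total = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / total)
            for cell, count in sample_counts.items()}

# Hypothetical age mix of respondents vs. an assumed target population:
sample = {"18-34": 600, "35-54": 300, "55+": 100}
population = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
weights = poststratification_weights(sample, population)
print(weights)  # e.g. the under-sampled "55+" cell gets weight 3.0
```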
As Vizu's "single question" approach drastically reduces non-response bias, and we use a classic experimental design, we do not need weighting adjustments to address these issues.