==Survey says...==
Those old enough to remember the Carter and Reagan administrations are likely to have enjoyed the highly popular game show, [http://www.youtube.com/watch?v=_oxt9e5B4bE ''Family Feud''], if not for the spectacle of two extended families competing against each other, then for the "play along at home" aspect of matching wits with those families, or (if anything) counting to see how many times host Richard Dawson would plant a (too often unwelcome) kiss (or two) on the lips of any female contestant.
  
===A survey we trusted===
  
The most intellectually viable aspect of ''Family Feud'' was the core of the program -- the response data from a survey of 100 people answering  questions that tend to cluster common answers:  "Name something you buy on every visit to the grocery store" or "Give a slang term for a policeman".
[[Image:richard-dawson.jpg|thumb|right|Richard Dawson on Family Feud]]
As a practitioner in the field of marketing research, I know darn well that a sample of 100 respondents ([http://sg.answers.yahoo.com/question/index;_ylt=Ana0kDXsKamqprcApIm6RfQh4wt.;_ylv=3?qid=20060606190510AAzZnok heaven knows] how they were selected for participation in the survey) is practically bunk.  But the methodology seemed to work out just fine for a family game show.  There were never any scandals or disputes centered on the answers to that survey.  We knew we were about to come face to face with a reliable-enough "fact" when Dawson would turn to that big board behind him and shout, "Survey says...!"
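For the record, the statistics behind that skepticism are simple. Here is a back-of-the-envelope check (mine, not anything the show ever published) of the best-case precision of a 100-person survey:

<pre>
import math

# Worst-case 95% margin of error for a proportion, at p = 0.5
n = 100                                  # Family Feud's stated sample size
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"Margin of error: +/-{moe:.1%}")  # +/-9.8%
</pre>

Roughly a ten-point margin on any answer, before we even ask how those 100 people were recruited.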
 
  
Today, in the world of overnight web-panel-based consumer data collection, I'm not nearly as comfortable as I was at a young age with trusty Richard Dawson and his big, flashing incandescent board on ''Family Feud''.
  
===My experience with Internet surveys===
 
  
 
 
I'm hardly new to the practice of conducting survey research via the Internet.  In fact, e-mail-borne surveys were an important part of my business practice as far back as 1993 -- respondents would "edit" the reply e-mail text with their answers, send it back, and the software would detect the answers within the confines of pre-formatted response spaces within the e-mail text.  Crude in retrospect, but these techniques worked fairly well, especially when targeting a highly selective sample (such as the customer list of a business-class laser printer manufacturer).
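The original parsing software is long gone, but the idea was roughly this -- a minimal sketch with hypothetical questions and bracket-delimited response spaces (the real format differed):

<pre>
import re

# Hypothetical pre-formatted reply: the respondent types an answer
# inside the square brackets of each numbered question.
reply = """
Q1. How many laser printers does your office use? [ 3 ]
Q2. What brand is your primary printer?           [ HP ]
Q3. Roughly how many pages do you print monthly?  [ 12000 ]
"""

# Pair each question number with whatever was typed in its brackets.
answers = dict(re.findall(r"(Q\d+)\..*?\[\s*(.*?)\s*\]", reply))
print(answers)  # {'Q1': '3', 'Q2': 'HP', 'Q3': '12000'}
</pre>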
  
About four or five years later, true web-based survey platforms were well established, but how to populate these questionnaires with [http://www.socialresearchmethods.net/kb/sampprob.php representative, diverse respondents] was becoming a hot potato.  Everyone seemed to acknowledge that web panels attracted non-typical consumers, but the low cost of execution and speed of turn-around were just so damn tempting.  Of course, the major web panel vendors did their best to come up with various techniques (and white papers) that demonstrated ways to "balance" web samples, so that they might pass muster with executives on the client side.  But remaining at the crux of all survey research, not just web-based sampling, is the question of self-selection bias.  People who willingly spend 15 minutes of their time to complete a questionnaire are not "normal", in the sense that they sometimes fail to represent the attitudes and behaviors of people who prefer not to spend their time that way.  Simply put, this problem appears to be accentuated among Internet populations.
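To be clear about what "balancing" means here: the vendors' actual recipes are proprietary, but the generic idea is simple cell weighting against census targets, something like this sketch (the shares shown are made up purely for illustration):

<pre>
# Cell-weighting ("post-stratification") sketch with made-up numbers.
panel_share  = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}  # who joined
census_share = {"18-34": 0.30, "35-54": 0.37, "55+": 0.33}  # who exists

# Each respondent in a cell is weighted by target share / panel share.
weights = {cell: census_share[cell] / panel_share[cell]
           for cell in panel_share}
print(weights)  # e.g. 55+ respondents count ~2.2x, 18-34 only ~0.67x

# The catch: weighting repairs the demographic mix, not self-selection.
# A double-counted 55+ panelist is still a person who joins web panels.
</pre>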
  
===Losing faith===
 
  
Between about 2001 and the present day, I've gradually been losing faith in the entire premise of reliable Internet-sampled and Internet-fielded marketing research.  Last month, a presentation at the [http://www.ctam.com/conferences/Research/index.html CTAM Research Conference] in Washington, DC, practically sealed the deal for me.  [http://www.mktginc.com/ourteam.asp Dr. Steven Gittelman] conducted a meta audit of 17 different U.S. web panels.  His research found that on nine of these panels, well over 15% of the participants were completing more than thirty Internet surveys per month.  Furthermore, on most U.S. panels, anywhere from 40% to 55% of members are also enrolled in at least '''four other''' survey research panels!
 
===Things that make you go, "Hmm..."===
 
  
 
 
My research team recently fielded a quick online survey with a San Diego vendor I implicitly trust to have one of the best panels in the online research business.  The sampling was intended to be nationally representative of Internet households who had either cut wire-line telephone service in the past 12 months, or were strongly intending to do so in the next 12 months, and guess what?  It’s rather clear that a lot of respondents weren’t paying attention by the end of the survey:  nearly 32% of the respondents said they were Hispanic or Latino.  There is no way that's a true statistic, especially considering how Hispanics under-index for Internet penetration and English fluency.
 
 
Granted, some of this particular over-reporting was due to the way the question was asked (in a format usually intended for a telephone survey, where I’m sure the live interviewer does a better job of getting the right answer):
  
;<span style="color: #008000;">''To ensure proper ethnic representation, please answer; are you of Hispanic or Latino ethnicity or background?''</span>
:<span style="color: #008000;">''1      Yes (white Hispanic)''</span>
:<span style="color: #008000;">''2      Yes (non-white Hispanic)''</span>
:<span style="color: #008000;">''3      No''</span>
:<span style="color: #008000;">''R      Prefer not to say''</span>
 
 
My guess is that a significant number of white non-Hispanics and black non-Hispanics selected punch 1 and punch 2, semi-consciously reacting to the words “white” and “non-white” to inform their response, rather than the question text itself.
  
 
 
In another recent study, we sampled digital cable customers who subscribe to a monthly DVD rental service.  The hyper-inflated findings from this sample included:
*More than 85% said they subscribe to high-definition television programming
*56% said they have a home theater
*Over 71% said they have either a video game device or a DVD player connected to the Internet
*Even more (72%) said they use a media center PC to watch video on their TV set
Yeah, right.  Maybe if the respondents are time travelers, reporting back to us their household characteristics from the year 2019.  Why do we tolerate "findings" like these?  In a word, because the data can be collected quickly and cost-efficiently, and (thankfully) these behavioral measures were not a key objective of what was essentially an attitudinal survey.
 
  
===Setting the trap===
Over the past year, I have taken to using a simple technique to "trap" respondents who are not paying attention to (or lying about) survey questions.  By adding "tripwire" questions to the beginning of a survey, I am able to diagnose respondents who are more likely blithely clicking check-boxes ("[http://www3.interscience.wiley.com/journal/112415330/abstract?CRETRY=1&amp;SRETRY=0 satisficing]" a questionnaire) than actually paying attention.  I provide a list of relatively uncommon products or experiences, then terminate from the survey anyone who answers that an ''extremely'' unlikely number of these items apply to them -- that is, it's far more likely the respondent is lazily or deceptively completing the questionnaire than it is that they are attentively and truthfully responding.  Some examples may help illustrate the principle.
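In code form, the screen itself is trivial; the craft is in choosing the items.  A minimal sketch, with the item list borrowed from the survey described just below and the termination threshold of four that we actually used:

<pre>
# "Tripwire" screener sketch: terminate any respondent who affirms
# an implausible number of deliberately low-incidence items.
TRIPWIRE_ITEMS = [
    "Segway personal transporter",
    "Jet Ski / Sea Doo personal watercraft",
    "Locked gun cabinet",
    "Bread-making machine",
    "Installed home security system",
    "Carbon monoxide detector",
]
MAX_PLAUSIBLE = 3  # affirming 4+ is more likely laziness or lying

def passes_tripwire(affirmed):
    """True if the respondent may continue; False means terminate."""
    hits = sum(1 for item in TRIPWIRE_ITEMS if item in affirmed)
    return hits <= MAX_PLAUSIBLE

print(passes_tripwire({"Bread-making machine"}))  # True
print(passes_tripwire(set(TRIPWIRE_ITEMS)))       # False
</pre>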
  
 
 
In a recent survey, I asked which of the following items were in the respondent's home, and these were the results:
{| class="wikitable"
! PRESENT IN HOUSEHOLD !! N=3258
|-
| Carbon monoxide detector || 37.3%
|-
| Bread-making machine || 24.8%
|-
| Installed home security system || 22.0%
|-
| Locked gun cabinet || 11.8%
|-
| Jet Ski / Sea Doo personal watercraft || 2.8%
|-
| Segway personal transporter || 1.4%
|}

We terminated the 160 individuals (5% of all candidates) who said that they had four or more of these items in their home.  Even so, that still leaves at least one in five of the homes in our sample saying they have a bread-making machine.  Is that even plausible?

There are about 114 million households in the United States.  If 1.4% of them own a Segway, that means this particular web survey suggests there are about 1.6 million Segway units dispersed across America.

[[Image:segway.jpg|thumb|right|One of the 1.6 million Segway owners?]]
Never mind that as of February 2007, only about [http://www.scientificamerican.com/article.cfm?id=power-walker 24,000 Segway units] had ever been sold, and many of them to corporate and law enforcement clients, not residential households.  So, we may choose between lazy and/or lying survey respondents (1.6 million) and realistic transactional data to guide us (24,000).
  
 
 
Do you see my frustration with web-based data collection?
Here is another example, where we simply terminated anyone who answered "yes" to four or more of a list of items.  In this study, we targeted adult householders in our market footprint (which covers about 40% of the nation), with at least a working television set, and we asked 504 possible respondents about their participation in the past 3 months in any of the following:

{| class="wikitable"
! PARTICIPATION LAST 3 MONTHS !! N=504
|-
| Collected unemployment check || 9.7%
|-
| Stayed in a Ramada Inn || 3.0%
|-
| Coached a youth baseball or soccer game || 2.4%
|-
| Participated in bowling league || 2.4%
|-
| Played duplicate bridge || 1.0%
|-
| Traveled to Africa || 0.6%
|-
| Traveled to Australia || 0.2%
|}
 
 
On this panel, we terminated anyone who affirmed at least 4 of these items -- a near impossibility.  What is the likelihood, for example, that a person selected at random is on unemployment, stayed in a Ramada Inn, rolls in a bowling league, and coaches a youth baseball or soccer team?  But we "caught" four such respondents out of 504.  This nearly impossible configuration would pro-rate to being true for about 1,785,700 Americans.  That is, 4 divided by 504, times about 225,000,000 adults.
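The projection arithmetic, spelled out:

<pre>
# Pro-rating the four "impossible" respondents to the population.
caught, sample, us_adults = 4, 504, 225_000_000
print(f"{caught / sample * us_adults:,.0f}")  # 1,785,714 Americans
</pre>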
  
This same data shows that 2.4% of adults are in a bowling league within the past three months, or 5.4 million adults.  This is about two times the known count of adults ''and'' children (combined) participating annually in a bowling league, [http://www.bowl.com/usbowler/about.aspx according to the USBC].  From corporate reports, I estimate that Ramada has about 50,000 rooms in the United States.  Over three months, that's about 4.5 million room-nights possible.  According to the above survey screener, 6.7 million adults have stayed in a Ramada room at some point in the past 3 months.  Even with 2 adults per room, that's an amazing occupancy rate -- Monday through Sunday, every week of the past three months, if we are to believe this sample.  I conclude that we cannot believe the sample.  The duplicate bridge stat is interesting -- web panels skew younger, and bridge skews older.  According to the ACBL, there are about 11 million people in the U.S. who play [http://homepage.mac.com/bridgeguys/pdf/Newspaper/RecreationSpecialization.pdf contract bridge].  According to our screener, though, it's only 2.25 million -- under-reported by a factor of perhaps five.
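The same kind of arithmetic drives the Ramada and bridge checks (and remember, my 50,000-room figure is itself only an estimate from corporate reports):

<pre>
us_adults = 225_000_000

# Ramada: 3.0% of adults claim a stay within the past 3 months.
claimed   = 0.030 * us_adults     # ~6.75 million adults
available = 50_000 * 90           # ~4.5 million room-nights
# Even if every claimed stay were a single night at 2 adults per room,
# these respondents alone would fill this share of all room-nights:
print(f"{claimed / 2 / available:.0%}")  # 75%

# Duplicate bridge: 1.0% of adults vs. the ACBL's ~11 million players.
print(f"{11_000_000 / (0.010 * us_adults):.1f}x")  # 4.9x -> "perhaps five"
</pre>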
  
===Can they pass the test?===
  
 
 
When showing respondents a description of a new product or service concept (sometimes even with an informative video clip), we've taken to the habit of giving the respondents a short, three-question "true or false" quiz about the concept they've just read about (and/or watched).  These are not very difficult questions for a sentient, attentive person of even less-than-average IQ to answer.  Consistently, we are finding that between 20% and 35% of respondents will fail this quiz that immediately follows presentation of the concept.  My conclusion:  perhaps a third of web survey respondents aren't paying any attention to the communications we're putting before them in surveys.
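That conclusion can be sanity-checked with a quick inversion, under simplifying assumptions that are mine, not data: attentive respondents always pass, inattentive ones guess each item at random, and a "fail" means missing at least one of the three true/false items:

<pre>
# Inverting observed quiz-failure rates into an inattention estimate.
p_fail_if_guessing = 1 - 0.5 ** 3  # 7/8 chance a random guesser fails
for observed_fail in (0.20, 0.35):
    inattentive = observed_fail / p_fail_if_guessing
    print(f"{observed_fail:.0%} failing -> ~{inattentive:.0%} inattentive")
# 20% failing -> ~23% inattentive; 35% failing -> ~40% inattentive,
# which brackets the "perhaps a third" estimate above.
</pre>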
  
''Akahele'' is presenting you with data, both anecdotal and quantitative, each and every week.  What conclusions are you drawing about the key theme of '''trust''' and the '''Internet'''?  We look forward to your joining us with personal comments below.
===Image credits===
*Richard Dawson (Mark Goodson-Bill Todman Productions), [http://www.copyright.gov/title17/92chap1.html#107 fair use doctrine].
*Segway personal transporter, [http://www.copyright.gov/title17/92chap1.html#107 fair use doctrine].
==Comments==
7 Responses to “Survey says…”

;Kato
:Interesting piece.
:It has become pretty clear lately that internet polling is a sham, yet in the UK at least, vital policy discussions are still being guided by polling sites like YouGov, which are open to all kinds of manipulation.
:This is another example, like Wikipedia, where reality does not match the touted claims. Snake oil salesmen are creaming massive profits by extolling the virtues of these flawed ventures.

;Dan T.
:I’m on some of those Internet survey panels myself; perhaps I even answered some of the surveys you commissioned (some of the questions above sound vaguely familiar).  Sometimes the surveys ask weird stuff, making me wonder just what the point of a survey is; your commentary gives me more background on that.
:They can be pretty annoying with their repetitive questions; I’m sick of constantly getting asked my age, sex, zip code, and education level even though those are already on file in my record, and sometimes the same survey will ask those demographic questions more than once (it’s pretty common for a survey to ask my age at the beginning, then my birthdate at the end).
:If a survey is too long (with lots and lots of questions about stuff I don’t give a flip about, like a long series of questions about what I think of the differences between brands of salty chips, their taste, their commercials, whether a particular brand gives “an impression of wholesomeness” or is one I “feel good about letting my kids eat” -- I don’t actually have any kids), eventually I get to a point where I just want to get the darn thing over with, so I’m not so careful in reading and answering the questions, perhaps producing some of the phenomena you see.  On the other hand, I do often try to diligently answer questions even if it requires an annoying amount of digging through stuff like receipts that show, to the nearest dollar, how much I spent for my last tank of gas or printer ink cartridge.  (I’m fortunately enough of a packrat to usually have those receipts even a few weeks later when the survey is asked; I imagine most others, who threw away the receipt, just give the survey-takers a guesstimate off the top of their head.)
:Am I breaking their rules where they keep reminding me that one condition of participating in their surveys is to never tell anybody else about what they ask in their surveys?  (But then they keep sending me stuff branded with their name as bonus prizes, meaning that if I actually use it, people may notice that I’m a member of that survey panel and ask me about it.)

;PJ
:What a great discourse on the issue. In the face of how much data (and common sense) point to the likely invalidity of much of online poll research, the extent to which some people don’t really care about the validity of the data is disappointing.  But in reality, the low cost and quicker execution are admittedly compelling incentives not to care. Your trap questions are a great way to try to separate the good from the bad and ugly.

;RFK
:I was about to say that I have been a participant in not just four, but five of the activities mentioned. But then I realized that you said ‘in the last 3 months’. Perhaps some responders were overlooking that requirement as well.
:Please be advised that duplicate bridge is just one style of contract bridge. There are many contract bridge players who do not play duplicate bridge.
:I participate in online surveys to rate my latest restaurant meal. I dare say that I have not been honest by saying a manager stopped by my table when, in fact, a manager was nowhere in sight.

;Gregory Kohs
:@Dan:  I suppose you are breaking rules about non-disclosure, but (like the GFDL license and Wikipedia) I have to also suppose that very few entities who issue content under such terms actually expect that the terms will be followed to the letter by everyone subject to the terms.
:@RFK: What are you, some kind of bridge director or something?

;RFK
:There is always room for humor – even on AKAHELE. I don’t have many answers but I enjoy browsing and searching. Count me as a regular AKAHELE reader.

;Sarge
:I am not an active internet survey participant, but I had to laugh a little at myself while reading this, because I do have a bread-making machine in my home.  It was given to me by my somewhat senile grandmother a few years back as a housewarming gift. I certainly do not see myself as the sort who would fit the demographic of a stereotypical bread-making machine owner (if there is even such a thing), but if I ever did run across that question on a survey, I would have to answer it honestly!
:Very well written.  I thoroughly enjoy all the content on Akahele thus far; I am glad to have stumbled onto this site, and it has been refreshing and thought-provoking.
