But what if I don't want to select 2



  • I wish all multiple choice surveys were this easy...




  • Then they'll throw out your survey.  Survey writers put in things like this to be sure the respondent is actually reading the questions or directions, and not just inputting random answers.



  • I would likely select any number but 2 just to show them who's the boss. If they're putting "questions" like this into a survey intentionally, then their survey is going to be worthless.



  • It's a sort of CAPTCHA. Of course, there's still a somewhat higher* than 20% chance that the bots will answer this correctly.

    * It takes no major effort for the small percentage** of bot developers who are smart to get around this kind of question.

    ** Disclaimer: I've never met a bot developer, smart or otherwise.



  • @SilentRunner said:

    I would likely select any number but 2 just to show them who's the boss. If they're putting "questions" like this into a survey intentionally, then their survey is going to be worthless.

     

    Not if they also want to weed out snarky people who want to show them who's the boss, and who probably aren't representative of the customers they're trying to find out about anyway...



  • Actually, we've seen this here before.



  • I had figured the question was a bot screener as well.  But this survey was also adaptive, asking several follow-up questions in relation to my initial Most/Least responses.  That seems like a more logical approach to weeding out bots and bored participants.

    I don't see the need for both, and this question actually devalued the survey in my opinion.



  • Boy.  This HOME DEPOT survey has way more examples of WTFery.  At least they specifically mention the question is for screening.



  • @jreasons68 said:

    I had figured the question was a bot screener as well.  But this survey was also adaptive, asking several follow-up questions in relation to my initial Most/Least responses.  That seems like a more logical approach to weeding out bots and bored participants.

    I don't see the need for both, and this question actually devalued the survey in my opinion.

    Oddly enough, they may not be trying to weed out bored participants, so long as those participants are at least paying enough attention to follow directions.  Also, they probably want the ratings much more than the follow-up questions: the ratings are easily tabulated by a computer; the follow-up questions require semantic analysis that computers are currently unable to do competently.  We will, of course, skip the question of whether the contractors they've hired to perform this analysis can do any better.  The point is, the contractors cost more per hour than the computers, and take more hours to do their portion of the review - unless only a very small, vocal subset of participants actually put in answers for the follow-up questions.

    Now, if there are significant differences between the ratings of the people who have answered the follow-up questions and the ratings of those who haven't, they have a problem.  Usually, however, there are few enough participants who answer follow-up questions in these sorts of things that the standard deviation in their answers is very high - and so there's unlikely to be a detectable significant difference...
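    Out of curiosity, here's a quick sketch of that last point with entirely made-up numbers (the sample ratings, the 1-5 scale, and the function name are all invented for illustration) - a plain-stdlib Welch t-statistic comparing the few vocal respondents against everyone else:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples
    with possibly unequal variances."""
    m1, m2 = statistics.mean(a), statistics.mean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)  # sample variance
    return (m1 - m2) / math.sqrt(v1 / len(a) + v2 / len(b))

# Hypothetical 1-5 ratings: many silent respondents,
# plus the small noisy subset who bothered to write follow-ups.
silent = [4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4]
vocal = [1, 5, 2, 4]  # tiny n, large variance

t = welch_t(silent, vocal)
print(round(t, 2))  # prints 1.15
```

    With only a handful of vocal respondents and their high variance, |t| stays well under the usual ~2 cutoff - no detectable difference, which is exactly the situation described above.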



  • Looks like a typical control question that is commonly used on surveys, particularly in marketing and psychology. The fact that anyone considers this a WTF is TRWTF.

    The purpose is to try to detect people/bots who answer randomly, people who answer in a deliberately uncooperative way, or people who answer inconsistently (although this particular question doesn't test for the latter).
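  • For what it's worth, the screening step itself is about as simple as back-end logic gets - a throwaway sketch, with the field names and response data entirely made up:

```python
# Hypothetical raw responses: each record holds the respondent's answer
# to the control question plus their actual ratings.
responses = [
    {"id": 1, "control": 2, "ratings": [4, 5, 3]},
    {"id": 2, "control": 5, "ratings": [1, 1, 1]},  # failed the check
    {"id": 3, "control": 2, "ratings": [3, 4, 4]},
]

CONTROL_EXPECTED = 2  # "Please select 2 for this question"

# Keep only respondents who followed the directions.
valid = [r for r in responses if r["control"] == CONTROL_EXPECTED]
print([r["id"] for r in valid])  # prints [1, 3]
```

    Everything else in the survey pipeline is harder than this filter, which is presumably why it's such a popular trick.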

