How many Google Consumer Surveys are being conducted by Google itself, just to prove that #gsurveys actually work?
One concern any researcher would have with Google Consumer Surveys (#gsurveys), and one Google needs to address proactively, is how many of the responses to any question are just “junk clicks”: people clicking to get through the “research wall” to reach the content they are after, without bothering to read the question. In a world with many junk clicks, the data gathered using #gsurveys would be worthless. More practically: in a world with many junk clicks, one could argue, the first answer choice should attract too many clicks.
Here is a question making the #gsurveys rounds that, given enough respondents, should result in 50% choosing the first answer and 50% choosing the second answer.
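A coin-flip question like this lends itself to a simple statistical check. Here is a minimal sketch, with entirely hypothetical tallies, of how one could test whether the first answer choice is drawing more clicks than chance allows; a significant excess would be the junk-click signature the question is designed to catch:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least
    k first-answer clicks if every respondent truly picks 50/50."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical tallies (not real #gsurveys data): 1,000 respondents,
# 540 of whom picked the first answer.
n, first = 1000, 540
p_value = binom_sf(first, n)
print(f"{first}/{n} chose the first answer; one-sided p = {p_value:.4f}")
```

With these made-up numbers the excess of first-answer clicks would be statistically significant, which is exactly the kind of evidence either party below would be looking for.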
We can think of two possibilities:
- This research is being conducted by Google, to help Google validate #gsurveys and to demonstrate to potential DIY customers that there are at best a trivial number of junk clicks, or …
- The research is being conducted by a research company (not SurveyUSA, by the way) trying to produce its own internal learning about Google to share with its clients, perhaps recognizing that it will eventually need to steer certain clients to #gsurveys, or perhaps aiming to discredit #gsurveys and caution its clients against using it.
Are we over-thinking this? Who do you think is conducting research on coin flips? And what could they possibly hope to learn here, if not: is anybody reading the question before clicking on an answer?