When the results of a survey don’t “feel” right

I read an article in the paper this week in which one of our local elected leaders indicated there was some discrepancy between the results of a recent survey and the responses they had personally encountered.

What the councilor was expressing is a concern about survey error… but also a personal bias about which responses are most valued: those generated online, or those from someone who walks up to them on the street.

It is interesting to observe survey metric reporting in our little community. Though hundreds of people might fill out a survey indicating a choice, their responses are discounted by elected officials who invariably value the spoken complaint. Sage Publications, in 2008, gave a short list of the most common types of survey error:

  1. coverage error
  2. sampling error
  3. response error
  4. measurement error

Coverage error means the survey didn’t properly collect responses from the whole population, usually due to over-reliance on a single collection method (too many face-to-face responses, too many online, etc.).

Sampling error occurs when the group surveyed is too small, is not truly representative of the whole, or when some externality prevented the intended survey group from being adequately represented.
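As a rough illustration (my own sketch, not drawn from the Sage text), the classic margin-of-error formula for a sample proportion shows how sample size drives this kind of error:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion.

    p: observed proportion (e.g. 0.5 for a 50/50 split)
    n: number of respondents
    z: critical value (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 split is the worst case for uncertainty.
for n in (100, 400, 1000):
    print(f"n={n}: +/- {margin_of_error(0.5, n):.1%}")
```

The takeaway: roughly 100 respondents leaves about a ten-point margin of error, and it takes four times as many respondents to cut that margin in half.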

Response error is common, and is usually teased out by surveying an appropriate number of people. A great example of response error is a respondent deliberately providing false information. We see this quite a bit with questions about which income bracket people identify with.

Lastly, measurement error usually comes through “poor question wording, with faulty assumptions and imperfect scales.” This is often a problem when questions are verbose and heavily influenced by the survey creator’s own views.

Some of my favorite bad survey questions…

Do you support President Trump? (Yes/No)… this is a false dichotomy, designed to sort already-polarized people into camps. Try again.

The last time you drank, what type of beer did you have?… this makes a faulty assumption: what if that person didn’t drink beer?

These are all examples of how surveys can go wrong; this post, however, is more about how we interpret them.

When thousands of people take a survey, correctly worded or not, and we declare that the results don’t match what we observe in our own conversations, we are walking a very slippery slope. What is really being admitted in that statement is that, in the eyes of that person, the spoken response is worth more than any online response. This presents a problem for any respondent who values the convenience of filling out an online form. Knowing that an elected official will invalidate their opinion because it wasn’t voiced at the barber shop, beauty parlor, or town hall creates an untenable dynamic for an evolving populace that increasingly prefers the convenience of third-party surveys to finding their elected official on Facebook or in the phonebook (apparently that’s still a thing) to share their thoughts.

As our community moves forward, it is important for everyone involved in surveys for public consumption to consider response weighting and to understand how proper survey technique and bias handling affect results. We also need to take a step back and remember: if a million survey results indicate something my gut disagrees with… my gut is probably wrong, and I need to read up on overconfidence bias.
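The response weighting mentioned above can be sketched roughly like this. It is a minimal post-stratification example of my own, with invented numbers, showing how a raw result from an online survey that skews older can be reweighted toward the population’s actual age mix:

```python
# Minimal post-stratification sketch: reweight respondents so each
# age group counts in proportion to its share of the population.
# All figures below are invented for illustration.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts    = {"18-34": 50,   "35-54": 100,  "55+": 250}   # online sample skews older
support_rate     = {"18-34": 0.80, "35-54": 0.60, "55+": 0.40}  # share answering "yes"

n = sum(sample_counts.values())

# Weight for each group = (population share) / (sample share).
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

raw = sum(sample_counts[g] * support_rate[g] for g in sample_counts) / n
weighted = sum(sample_counts[g] * weights[g] * support_rate[g] for g in sample_counts) / n

print(f"raw support:      {raw:.1%}")       # 50.0%
print(f"weighted support: {weighted:.1%}")  # 59.0%
```

In this invented case, correcting for the older-skewing sample moves the result by nine points, which is exactly why weighting decisions deserve scrutiny before anyone’s gut gets a vote.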