We use the failure of last week's general election exit polling as a starting point to discuss the lack of quality in survey research, particularly in B2B tech marketing. Both of us have been on the receiving end of lousy survey "results," or more accurately, wishful thinking on the part of marketing and PR people. So save everyone's energy: don't produce these 200-person SurveyMonkey polls that have no real meaning. Better yet, when a reporter asks to see the survey instrument and the underlying methodology, send it. You'll gain plenty of street cred and may even get some ink too.
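To make the sample-size point concrete, here is a minimal back-of-the-envelope sketch using the standard margin-of-error formula for a simple random sample; the figures are illustrative and not drawn from any particular survey mentioned on the show.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 is the worst case)
    z: z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 200-person poll carries roughly a +/- 7-point margin of error
# at 95% confidence, before accounting for any sampling bias.
print(f"n=200:   +/-{margin_of_error(200):.1%}")
print(f"n=1000:  +/-{margin_of_error(1000):.1%}")
```

Even under ideal random-sampling assumptions, which a self-selected online panel rarely meets, a 200-person sample can't distinguish results that differ by only a few points.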
Whether the presidential polling failed because pollsters ignored sampling errors or because they misread voters' extreme negative response to both candidates is hard to say. Grant Gross' excellent story in CIO.com goes into more detail about why you can't pin these failures on big data in general, and quotes several sources who say we have to do a better job of understanding the dynamics of traditional exit polls themselves.
Our recommendations are to pay careful attention to sample size, understand the sampling methodology, engage a professional pollster, research analyst, or statistician, and learn from the experts.
On that last suggestion: we are both big fans of Nora Barnes and the University of Massachusetts Dartmouth team she leads, which has studied social media usage among the top corporations. They are pros who have conducted both observational research and terrific polls over many years. Much of their core research doesn't involve surveys at all; they analyze data that's already out there.