Concept Testing: Forced to Choose Just One?
I was recently watching a PechaKucha presentation given in Chicago late last year. For anyone not familiar with PechaKucha, it’s a monthly event held in many cities where presenters have just under seven minutes to speak on a topic of their choice. The catch? Each presentation must be exactly 20 slides, with 20 seconds spent on each slide, so presenters have to keep it moving. For anyone in the business world, it’s great practice for honing your presentation skills!
Without realizing it, I stumbled on a topic that was not only relevant in terms of inspiration for a concise presentation but also one that made me think a little more about how we do research.
In the presentation titled “Fanta Wins,” the presenter, Rob Schaaf, talked about better voting practices in elections – specifically about the practice of asking voters to put their votes behind just one candidate. It’s what we’ve always done – so I thought: “What’s wrong with that?” As a market researcher, I design surveys where people need to make choices, so I wasn’t completely onboard with his point of view.
Regardless, I continued watching. To keep politics/partisanship out of it, Rob used an example of students voting for a beverage to be in their school vending machine – with the two main parties being soda and juice. Oh, and then there was the “spoiler” party – milk. Of course, it’s not realistic that a vending machine would have just one beverage, and as researchers, we all immediately think TURF! (For the non-researchers: TURF – Total Unduplicated Reach and Frequency – answers the question of which products we should offer to reach the greatest number of different people.) But, to prove a point, let’s go with it…
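As an aside, the core of a TURF reach calculation fits in a few lines of Python. Everything below – the respondent names, the acceptable-beverage sets, the product list – is invented purely for illustration; real TURF runs on survey acceptance data:

```python
from itertools import combinations

# Invented data: which beverages each hypothetical student would buy.
preferences = {
    "Ava":  {"Fanta", "Pepsi"},
    "Ben":  {"Apple juice"},
    "Cara": {"2% Milk", "Apple juice"},
    "Dev":  {"Coca-Cola"},
    "Elle": {"Fanta"},
    "Finn": {"2% Milk"},
}
products = {"Fanta", "Pepsi", "Coca-Cola", "Apple juice", "2% Milk"}

def reach(combo):
    # Count distinct respondents who would accept at least one product in the lineup.
    return sum(1 for likes in preferences.values() if likes & set(combo))

def turf(k):
    # Exhaustively score every k-product lineup and keep the widest-reaching one.
    best = max(combinations(sorted(products), k), key=reach)
    return best, reach(best)

lineup, n = turf(2)
print(lineup, n)
```

With these made-up preferences, a two-slot vending machine reaches 4 of the 6 students – and Fanta earns a slot on reach, not on winner-take-all votes.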
Starting with the primaries:
- The soda party is divided, with no strong support for any one candidate (lots of in-party fighting), and Fanta ends up winning over all others (vs. Pepsi, Coca-Cola, etc.).
- Juice leaders push hard for one candidate and apple ends up winning.
In the general election, you might think apple has the advantage. But then along comes the “spoiler” milk party, with the candidate 2% Milk – and Fanta wins without even getting a majority of the votes.
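A plurality tally like this is trivial to simulate. The vote counts below are invented, but they show the mechanic: once milk splits the juice-leaning vote, the winner can sit well under 50%:

```python
from collections import Counter

# Invented general-election ballots: one choice per voter.
ballots = ["Fanta"] * 40 + ["Apple juice"] * 35 + ["2% Milk"] * 25

tally = Counter(ballots)
winner, votes = tally.most_common(1)[0]
share = votes / len(ballots)
print(winner, votes, share)  # Fanta takes first place with only 40% of the vote
```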
What are the alternatives?
- One group (ElectionScience.org) pushes for approval ballots – where people vote for every candidate they approve of. How would this have changed things?
- Another group (FairVote.org) pushes for ranked-choice ballots – where people rank the candidates in order of preference. How would this have changed things?
Both groups have their arguments for which is the best approach, but I won’t get into that here. In the end, the idea is that there wouldn’t be so much mud-slinging in politics, as candidates wouldn’t want to completely alienate another candidate’s supporters – especially in primaries, where many candidates are in the running. In addition, there would be more room for smaller parties – currently, some voters are afraid to vote for one of those candidates because of the “spoiler” effect. So, they either don’t vote or pick what they consider the lesser of two evils among the front-runners.
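To make the two alternatives concrete, here is a sketch of both tallies on the beverage scenario. The ballot counts are invented, and I’m assuming (purely for illustration) that most milk supporters would also approve of, or rank next, apple juice:

```python
from collections import Counter

# --- Approval voting: each voter marks every candidate they approve of ---
approval_ballots = (
    [{"Fanta"}] * 40
    + [{"Apple juice"}] * 35
    + [{"2% Milk", "Apple juice"}] * 20  # assumed overlap, invented numbers
    + [{"2% Milk"}] * 5
)
approvals = Counter(c for ballot in approval_ballots for c in ballot)
print(approvals.most_common(1))  # Apple juice now leads with 55 approvals

# --- Ranked choice (instant runoff): voters rank all candidates ---
ranked_ballots = (
    [["Fanta", "Apple juice", "2% Milk"]] * 40
    + [["Apple juice", "2% Milk", "Fanta"]] * 35
    + [["2% Milk", "Apple juice", "Fanta"]] * 25
)

def instant_runoff(ballots):
    # Assumes every ballot ranks every candidate (no truncated rankings).
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked candidate still in the race.
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = firsts.most_common(1)[0]
        if votes * 2 > len(ballots):  # strict majority ends the count
            return top
        remaining.discard(min(firsts, key=firsts.get))  # drop the last-place candidate

print(instant_runoff(ranked_ballots))
```

Under either method, 2% Milk stops acting as a spoiler: approval lets milk voters also back apple juice, and instant runoff transfers their ballots to apple juice once milk is eliminated – so apple juice wins in both tallies (with these invented numbers, at least).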
That got me thinking – should we re-think how we’re asking questions in any of our surveys? In the end, I don’t think we need to change too much.
The kinds of surveys most applicable to the election scenario are concept tests. And, many times, we’re essentially using one of these methods already – the approval ballot method. Specifically, in sequential monadic concept tests, we have respondents rate multiple concepts on a variety of measures, and then we compare them. There’s nothing that prevents a respondent from rating multiple concepts highly.
That being said, sometimes we also employ the ranked-choice method – if we ask respondents to rank the concepts from most to least favorite after evaluating them (as this can help us break ties in the rating data).
Where might we want to change things? There have been times when, instead of a ranking question, we’ve asked respondents to select their most preferred concept as a tie-breaker (as it’s generally easier and quicker than a ranking question). However, upon further consideration, this might not be the best option. If there are a lot of concepts, we could instead ask respondents to rank their top 3 to 5 – which would be more useful than forcing them to pick just one.
In the end, I think we’re doing a pretty good job of avoiding some of the same issues that plague our election system. How long it will take for that system to change remains to be seen.