Have you ever wanted to know which attributes of your product, service, or brand are preferred by your customers?

You may need to drill down during product R&D for efficiency's sake, or perhaps you have a limited budget and need to make sure your targeting is as effective as possible.

You would be right to worry that simply asking consumers to rate every attribute could leave all of them clustered in the middle of the scale, or all highly rated if people love your product. As a result, you won't gain clear insight into which product attributes your consumers value the most and which they care about the least.

This is when a MaxDiff question or more complex experimental design can come to the rescue.

Whether you’re investigating brand preferences, product features, message testing, or something else, a maximum difference (aka MaxDiff) experiment is a very useful approach for obtaining preference or importance scores for multiple attributes.

A MaxDiff assumes that respondents evaluate all possible pairs of items within a displayed set and choose the pair that reflects the maximum difference in preference for them. As the researcher, you can then determine, across a representative sample, which attributes stand out the most.

In its simplest form, a MaxDiff is a single question consisting of perhaps 4 to 7 items, where participants select the two anchors, such as "Best/Worst" or "Least/Most".

But let's say you have 10, 15, 20, or even 30 product attributes or claims you want to test in order to understand which ones your consumers find the most and least compelling. Asking the participant to select only two items representing the anchoring choices is not good research practice.

This is because you're missing out on a lot of information about the items that were not selected, not to mention the cognitive overload of asking someone to pick two options out of 10 or more. The alternative is to divide the total set of claims into subgroups: for example, dividing 20 items into subgroups of 5.
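As a naive sketch of the idea (a random split, not yet the balanced design discussed below), dividing 20 hypothetical claims into subgroups of 5 might look like this:

```python
import random

def split_into_subgroups(items, group_size, seed=None):
    """Randomly partition a list of items into fixed-size subgroups.

    Note: this is a plain random split for illustration only; it does not
    enforce frequency balance, orthogonality, or positional balance.
    """
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + group_size]
            for i in range(0, len(shuffled), group_size)]

# 20 hypothetical claims, split into 4 subgroups of 5
claims = [f"Claim {i}" for i in range(1, 21)]
subgroups = split_into_subgroups(claims, 5, seed=42)
print(len(subgroups))  # 4
```

Each respondent could then answer one MaxDiff question per subgroup, keeping the cognitive load manageable.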

But that raises the next challenge, which brings us back to the title…

What is the one question you should ask before conducting a MaxDiff study?

Consumer insights teams often ask, "Is it randomized?" A good and valid question: you should want the items to be as randomized as possible. But that's not the question you should ask!

Randomizing 20 items presents its own challenges around combinations and permutations. Deep down, you may have a repressed memory of that probability and statistics course you took years ago, with questions like: how many 5-item subgroups can you make out of 20 total items?

No need to fetch the dusty old textbook; the formula is nCr = n!/(r!(n-r)!).
(*No need to bookmark this post either, because SightX will handle this for you!*)

Let's put the formula to work. The number of 5-item groups you can create from 20 items is 20!/(5!(20-5)!) = 15,504.
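You can verify this quickly in Python, either by applying the formula directly or by using the standard library's built-in helper:

```python
from math import comb, factorial

# Number of 5-item subgroups from 20 items: nCr = n! / (r! * (n-r)!)
n, r = 20, 5
by_formula = factorial(n) // (factorial(r) * factorial(n - r))

print(by_formula)   # 15504
print(comb(n, r))   # 15504, the same result via math.comb
```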

Yes, that is the actual number of combinations one can generate in this scenario, and the number of possible combinations grows explosively as the number of items increases.

So how do you solve this? Enter Professor Jordan Louviere, whose team at the Centre for the Study of Choice pioneered this work with the concept of a "balanced design".

The question an insights professional should always make sure to ask is: "Is the design balanced?"

To avoid inaccurate results, your study should meet the following criteria for a balanced design:

- **Frequency Balance**: Each item appears an equal number of times across your respondents.
- **Orthogonality**: Each item is paired with each other item an equal number of times.
- **Connectivity**: A set of items is connected if the items cannot be divided into two groups where no item in one group is ever paired with any item in the other. For example, take items A, B, C, and D, and suppose we pair AB and CD. If each pair sits in a separate group, we cannot learn relative preferences across all the items, since A is never compared against anything in the other group. However, if we had asked pairs AB, BC, and CD, then all items would be interconnected; even though many pairs (such as AC and AD) were never asked, we could infer their relative order of preference.
- **Positional Balance**: Each item appears an equal number of times on the left and on the right.
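As an illustrative sketch (not SightX's implementation), the first three criteria can be sanity-checked with a few lines of Python. Here, a "design" is simply a list of the item sets shown to respondents, and the connectivity check reproduces the A/B/C/D example above:

```python
from itertools import combinations
from collections import Counter

def frequency_counts(design):
    """How often each item appears across all sets (frequency balance)."""
    return Counter(item for task in design for item in task)

def pair_counts(design):
    """How often each pair of items co-occurs in a set (orthogonality)."""
    return Counter(frozenset(p) for task in design for p in combinations(task, 2))

def is_connected(design):
    """True if every item is linked to every other item through shared sets."""
    items = {item for task in design for item in task}
    if not items:
        return True
    adjacency = {item: set() for item in items}
    for task in design:
        for a, b in combinations(task, 2):
            adjacency[a].add(b)
            adjacency[b].add(a)
    # Breadth-first search from an arbitrary item
    seen, stack = set(), [next(iter(items))]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adjacency[node] - seen)
    return seen == items

# The example from the criteria above:
print(is_connected([("A", "B"), ("C", "D")]))               # False: two islands
print(is_connected([("A", "B"), ("B", "C"), ("C", "D")]))   # True: all linked
```

A frequency or pair count that is not the same for every item (or every pair) flags an imbalance worth fixing before fielding the study.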

While asking about randomization is good, asking about balanced design is critical to the accuracy of your results.

So with a balanced design, you can conduct a much more reasonably sized study. Practical sample-size guidelines range from roughly 200 to 1,000 respondents, depending on your stated research goals and what you're trying to measure and compare.

Automating Curiosity