Have you ever wanted to know which attributes of your product, service, or brand your customers prefer?

You may need to drill down during product R&D for development efficiency, or perhaps you have a limited marketing budget and need to better target your messaging for maximum engagement.

If you ask consumers to rate the features of your product, there is a chance that they might all end up somewhere in the middle of the scale. This won't tell you much about which features they value most and which they care for the least.

This is precisely where MaxDiff comes into play.

### What is a MaxDiff Experiment?

Whether you’re investigating brand preferences, product features, or messaging, a maximum difference (aka MaxDiff) experiment is a useful research approach for obtaining preference or importance scores for multiple attributes.

A MaxDiff experiment presents your respondents with a series of questions, each containing a group of attributes. Respondents then choose their favorite attribute (best) and the one they are least enthusiastic about (worst). This allows researchers to determine, across a representative sample, which attributes stand out the most.

### Conducting a MaxDiff Experiment

In its simplest form, a MaxDiff experiment shows respondents 4 to 7 attributes at a time, asking them to select the best and worst options from each set.

But, let’s say you have 10, 15, 20, or even 30 product features you want to test to better understand which your consumers find the most and least compelling.

Presenting participants with a long list of attributes and asking them to select only the two anchoring choices is not necessarily good research practice. You miss out on a lot of information about the items that were *not* selected. Not to mention that having someone pick just two options out of 10 or more can be quite overwhelming.

The alternative is to divide the full list of items into subgroups: for example, splitting 20 items into subgroups of 5.

But then comes the next challenge, which coincidentally brings us back to the title…

### What Is the One Question You Should Ask Before Conducting a MaxDiff Study?

It is often the case that consumer insights teams ask, “Is it randomized?” This is a good and valid question: you should want the items to be as randomized as possible. But that’s not *the* question you should ask!

Randomizing 20 items presents its own challenges around combinations and permutations. Deep down, you may have a repressed memory of that probability and statistics course you took years ago, with questions like: how many 5-item subgroups can you make from 20 total items?

No need to fetch your dusty old textbook; the formula is nCr = n! / (r! × (n − r)!)

No need to bookmark this post, either: the SightX platform will handle this for you!

For the sake of understanding, let’s put the formula to work. The number of 5-item groups you can create from 20 items is: 20! / (5! × (20 − 5)!) = 15,504.

There are a total of *15,504* possible combinations!

Yes, that is the actual number of combinations one can generate in this scenario, and it grows extremely quickly as the number of items increases.
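If you'd like to verify the arithmetic yourself, the calculation above takes only a few lines of Python. This is just an illustrative check, not part of any particular survey platform:

```python
from math import comb, factorial

n, r = 20, 5  # choose 5-item subgroups from 20 items

# The textbook formula: n! / (r! * (n - r)!)
by_formula = factorial(n) // (factorial(r) * factorial(n - r))

# Python's built-in binomial coefficient gives the same answer
by_builtin = comb(n, r)

print(by_formula, by_builtin)  # 15504 15504
```

Swap in your own `n` and `r` to see how fast the number of possible subgroups explodes as the item list grows.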

So how do you solve for this?

Enter Professor Jordan Louviere, whose team at the Centre for the Study of Choice pioneered this work with the concept of “Balanced Design”.

*The question that an insights professional should always make sure to ask is: “is the design balanced?”*

The following are the criteria a balanced design must meet for your study to avoid inaccurate results:

- Frequency Balance: Each item should appear an equal number of times across your respondents;
- Orthogonality: Each item is paired with each other item an equal number of times;
- Connectivity: A set of items is connected if it cannot be divided into two groups such that no item in one group is ever paired with any item in the other. For example, take items A, B, C, and D, and suppose we pair only AB and CD. With each pair in a separate group, we cannot learn relative preferences across the groups, since A is never compared against items from the other group. However, if we ask the pairs AB, BC, and CD, all items become interconnected: even though pairs such as AC and AD were never asked, we can still infer their relative order of preference;
- Positional Balance: Each item appears an equal number of times on the left and right.
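To make these criteria concrete, here is a rough sketch of how you might diagnose a candidate design yourself. The `check_design` function and the design format (a list of question sets) are my own illustrative assumptions, not part of any platform's API, and positional balance is omitted for brevity:

```python
from collections import Counter
from itertools import combinations

def check_design(question_sets):
    """Diagnostics for a candidate MaxDiff design: each question set
    is a list of item labels shown together in one question."""
    item_counts = Counter(item for s in question_sets for item in s)
    pair_counts = Counter(frozenset(p) for s in question_sets
                          for p in combinations(s, 2))
    items = list(item_counts)

    # Frequency balance: every item appears the same number of times.
    freq_balanced = len(set(item_counts.values())) == 1

    # Orthogonality: every possible pair co-occurs equally often
    # (pairs that never co-occur count as zero).
    all_pairs = [frozenset(p) for p in combinations(items, 2)]
    orthogonal = len({pair_counts.get(p, 0) for p in all_pairs}) == 1

    # Connectivity: items form one connected graph, where an edge
    # means the two items appeared together in some question.
    adjacency = {i: set() for i in items}
    for pair in pair_counts:
        a, b = tuple(pair)
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, stack = set(), [items[0]]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adjacency[node] - seen)
    connected = len(seen) == len(items)

    return freq_balanced, orthogonal, connected

# The connectivity example from above: pairs AB, BC, CD link all four
# items even though AC, AD, and BD are never asked directly.
print(check_design([["A", "B"], ["B", "C"], ["C", "D"]]))
# → (False, False, True): connected, but not frequency-balanced or orthogonal
```

In practice a survey platform generates balanced designs for you; a checker like this is mainly useful for building intuition about what the criteria mean.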

While asking about randomization is good, asking about balanced design is crucial to the accuracy of your results.

So now, with a balanced design, you’re able to conduct a much more reasonably sized study. Practical sample sizes typically range from roughly 200 to 1,000 respondents, depending on your research goals and what you’re trying to measure or compare.

Ready to conduct your own MaxDiff experiments? Reach out to our team to get started today!