Conjoint analysis is a survey-based method widely used in market research, new product design, government policy-making and the social sciences to understand people’s preferences, and to shape products and policies accordingly.

In the social sciences, conjoint analysis is also known as a discrete choice experiment (DCE) (McFadden 1974) or, more generally, as choice modelling.

Conjoint analysis (or DCEs / choice modelling) is an invaluable tool for businesses and for nonprofit and government organizations in all sectors. In this article, we explain what conjoint analysis is, how it works, and how 1000minds can help you uncover people’s preferences.

What is conjoint analysis?

Conjoint analysis is used to evaluate people’s preferences regarding the relative importance of the various attributes or characteristics of a given object of interest. This information is then used to design or improve products based on consumer preferences, or to make decisions involving many stakeholders, such as for government policy-making.

Examples of conjoint analysis include:

  • Discovering the features that consumers care about most in smartphones, in order to design and market the most successful smartphone model.
  • Understanding how employees feel about the various benefits offered by an employer, such as pay relative to holidays, health benefits, etc, so that the benefits package maximizing employee satisfaction is offered.
  • Analyzing citizen preferences over the use of natural resources like land and water that are subject to trade-offs between farmers, recreationists, conservationists, etc, in order to shape public policy.
  • ... and much, much more! For example, most consumer products in our lives today benefited from conjoint analysis when they were being developed. Conjoint analysis is increasingly used for designing government policies too (e.g. Au, Coleman & Sullivan 2015).

Conjoint analysis terminology

Conjoint analysis has its own very specialised terminology, which can be a little bit daunting for people who are new to the area – but don’t worry, you’ll soon appreciate that the fundamental concepts are intuitively simple!

As illustrated below, conjoint analysis involves survey participants expressing their preferences by repeatedly choosing their preferred alternatives from ‘choice sets’ consisting of two or more hypothetical alternatives defined in terms of two or more attributes at a time.

Typically, the alternatives in the choice set are defined so that there is a trade-off between the attributes (hence, conjoint analysis used to be known as ‘trade-off analysis’). An example of the simplest possible choice set appears in Figure 1 below (for a conjoint analysis into smartphones).

Because choices are fundamental to conjoint analysis, it is often referred to as ‘choice-based conjoint analysis’ (CBC). Also, the name ‘conjoint analysis’ comes from the fact that the alternatives in the choice sets are defined as combinations of two or more attributes, i.e. they are connected to each other (like conjoined twins).

Finally, conjoint analysis is also known as a discrete choice experiment (DCE) (McFadden 1974). This name comes about because participants in a DCE (or conjoint analysis) are asked to make a series of distinct (‘discrete’) choices in an imaginary (‘experimental’) setting.

Key elements of conjoint analysis (or a DCE) include:

  • Attributes: Features or characteristics of the product or other alternative of interest, with two or more levels of performance or achievement – e.g. for a conjoint analysis into smartphones (an example most of us can relate to), the attributes would likely include operating speed and performance, camera quality, battery life, etc.
  • Alternatives: Combinations of attributes representing particular products or other alternatives of interest. Alternatives are also called ‘concepts’ or ‘profiles’ – e.g. different configurations of a phone with respect to its features.
  • Utilities: Values (or weights) representing the relative importance of the attributes. Utilities are also called ‘part-worths’ or ‘part-worth utilities’ – e.g. reflecting how consumers feel about phones with high-quality cameras versus long battery life, etc.

How does conjoint analysis work?

Conjoint analysis works by presenting people with a survey where they are repeatedly asked to decide which of two or more hypothetical alternatives (or ‘concepts’ or ‘profiles’, as above) they prefer.

Each alternative is made up of a different combination of attributes (e.g. Figure 1 and Figure 2 below), so that when participants choose between the alternatives they are forced to make trade-offs between the attributes – as we all do in real-world decision-making (e.g. choosing a new phone).

From people’s choices, utilities (weights) representing the relative importance of the attributes are determined.

These weights can be used to rank any alternatives being considered (i.e. combinations of attributes representing particular products or other alternatives of interest), thereby revealing the ‘best’, ‘second-best’, ‘third-best’, etc alternatives.
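To make the additive logic concrete, here is a minimal Python sketch with hypothetical attributes, part-worth utilities and phone alternatives (none of these numbers come from a real survey):

```python
# Hypothetical part-worth utilities for three smartphone attributes.
part_worths = {
    "camera":  {"basic": 0.0, "good": 10.0, "excellent": 25.0},
    "battery": {"1 day": 0.0, "2 days": 15.0},
    "price":   {"$900": 0.0, "$600": 20.0},
}

# Two illustrative alternatives, each a combination of attribute levels.
alternatives = {
    "Phone A": {"camera": "excellent", "battery": "1 day", "price": "$900"},
    "Phone B": {"camera": "good", "battery": "2 days", "price": "$600"},
}

def total_utility(alt):
    # Additive model: sum the part-worth utility of each attribute level.
    return sum(part_worths[attr][level] for attr, level in alt.items())

ranking = sorted(alternatives,
                 key=lambda name: total_utility(alternatives[name]),
                 reverse=True)
# ranking[0] is the 'best' alternative, ranking[1] the 'second-best', etc
```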

These utilities and rankings of alternatives are extremely useful information for designing products or policies (depending on the application) that people most desire.

A range of methods are available for performing conjoint analysis, as supported by specialised software or statistics packages.

A prominent example of specialised conjoint-analysis software is 1000minds (implementing the PAPRIKA method, Hansen & Ombler 2008). If you would like to, you can easily and immediately experience this software for yourself by creating a 1000minds account.

The main differences between conjoint analysis methods are summarized in the four sub-sections below.

Number of alternatives in choice sets

Conjoint analysis methods can be differentiated with respect to how many hypothetical alternatives are included in the choice sets presented to survey participants, typically ranging from two to five at a time:

  • For choice sets with just two alternatives at a time (e.g. Figure 1 below), survey participants are asked to pairwise rank them, i.e. choose the ‘better’ alternative.
  • For choice sets with three alternatives at a time, participants are asked to rank them overall (1st, 2nd, 3rd).
  • For choice sets with four or five alternatives at a time, participants are asked to select just the ‘best’ and ‘worst’ of them – known as ‘best-worst scaling’ (Louviere, Flynn & Marley 2015).

The advantage of conjoint analysis based on choice sets with two alternatives at a time is that choosing between just two alternatives is the simplest possible choice – simpler (and easier) than ranking or choosing from three or more alternatives. Choosing between two things at a time (i.e. pairwise ranking) is a natural human decision-making activity; we make such choices many times a day: would you like a cup of coffee or tea? Shall we walk or drive? etc.

This simplicity ensures that participants can make their choices with greater confidence (and faster), leading to survey results that more accurately reflect participants’ true preferences.

Partial-profile versus full-profile conjoint analysis

The pairwise ranking task referred to above can be simplified even more by defining each hypothetical alternative included in a choice set on just two attributes (or criteria) at a time. This type of conjoint analysis is known as ‘partial-profile’ conjoint analysis.

In contrast, full-profile conjoint analysis is based on choice sets where the alternatives are defined on all the attributes at once (e.g. six or more). Examples of partial- and full-profile choice sets respectively appear in Figures 1 and 2 below (for a conjoint analysis into smartphones).

Pairwise ranking full-profile choice sets is more cognitively difficult (and slower) than pairwise ranking partial-profile choice sets. Therefore, all else being equal, conjoint analysis results obtained using full profiles are less likely to be accurate, although such choice sets are arguably more realistic.

Figure 1: Example of a partial-profile choice set

Example question involving two attributes

Figure 2: Example of a full-profile choice set

Example question involving six attributes

Adaptive versus non-adaptive conjoint analysis

Another key differentiator of conjoint analysis methods is whether the choice sets are presented to each participant in an adaptive or a non-adaptive way.

For non-adaptive methods, participants are presented with pre-determined (fixed) choice sets (e.g. a dozen or more). Because the number of possible choice sets is typically huge, sub-samples of participants are usually presented with common (shared) subsets of choice sets.

In contrast, adaptive methods such as the PAPRIKA method present each participant with their own personalized sequence of choice sets, determined by their answers. Each time a participant answers a pairwise ranking question, PAPRIKA chooses the next question based on that ranking and all preceding ones – i.e. PAPRIKA adapts. Based on the person’s next answer, another question is presented, and then another, and so on.
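As an illustration of the adaptive idea (not the actual PAPRIKA algorithm, which is more sophisticated – see Hansen & Ombler 2008), the following Python sketch skips any pairwise question whose answer is already implied by transitivity from earlier answers:

```python
# Toy adaptive pairwise ranking: each answer is combined with all
# previous answers via transitivity, so implied comparisons are never
# asked. Alternatives and the simulated participant are hypothetical.
alternatives = ["A", "B", "C", "D"]
prefers = set()  # (x, y) means "x is preferred to y"

def implied(x, y):
    # True if "x preferred to y" already follows from the answers.
    if (x, y) in prefers:
        return True
    return any((x, z) in prefers and implied(z, y) for z in alternatives)

def answer(x, y, choice):
    prefers.add((x, y) if choice == x else (y, x))

# Question order chosen for illustration; simulated participant always
# prefers the alphabetically earlier alternative.
order = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("A", "D"), ("B", "D")]
questions_asked = 0
for x, y in order:
    if implied(x, y) or implied(y, x):
        continue  # adapt: skip questions whose answer is already known
    questions_asked += 1
    answer(x, y, x)
# Only 3 of the 6 possible questions need to be asked.
```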

Methods for calculating utilities

Through the choices that participants make, the conjoint analysis uncovers people’s preferences and assigns utilities to the attributes – numerical values that represent the relative importance of the attributes. Further analyses are possible (see below), and the utilities can be used to identify an ideal alternative (e.g. a consumer product or government policy).

The PAPRIKA method uses mathematical methods based on linear programming to calculate the utilities for each participant. Technical details are available in Hansen & Ombler (2008).

Other conjoint analysis methods apply regression techniques (e.g. multinomial logit analysis and hierarchical Bayes estimation) to calculate the utilities. These methods are explained in detail in Train (2009).

A major strength of the PAPRIKA method is that utilities are generated for each individual participant, in contrast to other methods that produce aggregate data only. Individual-level data enables, for example, cluster analysis to be performed to identify ‘clusters’ (groups) of people with similar patterns of preferences.

Why use 1000minds for conjoint analysis?

1000minds is the only conjoint analysis software created specifically to reduce the cognitive burden on survey participants as much as possible, while maximizing reliability, repeatability and the scientific validity of the results.

Owing to the scientific rigor and user-friendliness of the PAPRIKA pairwise comparison method at the heart of 1000minds (see our awards), it is used by businesses and government agencies across all sectors, as well as for research and teaching at 450+ universities and other research organizations worldwide.

We have a wide range of conjoint analysis examples available in our app to help you get started, which are designed to give you a feel for how 1000minds works. Try them out now (you’ll be asked to create a free trial account if you don’t have one already – very easy!).

Our conjoint analysis surveys allow you to include as many participants as you like, potentially 1000s. Participants can self-enroll from a sign-up webpage – e.g. great for ‘convenience’ or ‘snowball’ sampling. Or, if you know people’s email addresses, you can enter them into 1000minds, and they’ll be invited to take part.

A fun demonstration!

The following link is to a conjoint analysis survey set up for helping you to choose a breed of cat as a pet! This light-hearted example neatly demonstrates many of 1000minds’ features from a survey participant’s perspective.

Free demo: Take the cat breed survey

1000minds – adaptive choice-based conjoint analysis (ACBC)

1000minds developed the award-winning PAPRIKA pairwise comparison method to determine people’s utilities (weights) by asking questions based on choosing between pairs of alternatives defined on two attributes at a time and involving a trade-off (Figure 1 above). Hence this type of conjoint analysis is referred to as ‘choice-based’.

In addition, 1000minds is a type of ‘adaptive’ conjoint analysis because each time a choice is made, 1000minds adapts by formulating a new question to ask based on all previous choices.

1000minds is fast and scalable. No extra analysis is needed to derive standard conjoint analysis outputs (see analyzing outputs). Also, potentially 1000s of people can participate in conjoint analysis surveys.

Conjoint analysis example

Suppose the conjoint analysis survey is to discover what consumers of ‘flavoured milk drinks’ care about (generalisable to other products or alternatives of interest too).

The survey would usually involve each survey participant answering a series of questions involving trade-offs between attributes associated with flavoured milk drinks – e.g. taste, nutrition, price, shelf life, brand image.

So that you can better relate to the example below, you should do the survey for yourself.

From each participant’s answers to the survey questions, ‘utilities’, representing the relative importance of the attributes, are calculated. These utilities are then used to rank different flavoured milk drink ‘alternatives’ (i.e. combinations of the attributes), including choosing the ‘best’ alternative – e.g. for a manufacturer to produce.

These basic conjoint analysis outputs are now presented and analyzed in various useful ways. Though the example here has a marketing-research focus, the ideas illustrated below can be generalised to other conjoint analysis applications too (e.g. with a government policy focus).

The outputs below are from a 1000minds conjoint analysis survey, implementing the PAPRIKA pairwise comparison method. A major strength of the PAPRIKA method is that utilities are generated for each individual participant, in contrast to other methods that produce aggregate results only. Individual-level data enables more in-depth analysis, as illustrated below.

Conjoint analysis outputs

Part-worth utilities

For simplicity, suppose there are just five participants in the survey (of course a real survey would probably involve 100s or 1000s of participants): Consumers ‘X’, ‘Y’, ‘Z’, ‘Paul’ and ‘Alfonse’, as in the tables below.

First, here are the utilities for each participant – in this example relating to attributes associated with flavoured milk drinks – as well as the usual summary statistics (median, mean, standard deviation).

The value for each level on an attribute represents the combined effect of the attribute’s relative importance (weight) and its degree of achievement as reflected by the level (for more information, see interpreting preference (utility) values).

Part-worth utilities table

As well as the utilities reported above, here are the ‘normalized’ attribute weights and scores – an alternative, though equivalent, representation of the mean utility values (second-last column above). This equivalence is easily confirmed by multiplying the weights and single attribute scores to reproduce the (mean) utilities above.

Normalized attribute weights table
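The equivalence described above can be checked in a few lines of Python, using the mean utilities from this example (the 68.6 single-attribute score is derived as 11.6/16.9 × 100):

```python
# Normalized weights: each attribute's mean utility for its best level,
# expressed as a proportion (the five sum to 1).
weights = {"taste": 0.169, "nutrition": 0.191, "price": 0.207,
           "shelf life": 0.153, "brand image": 0.280}

# Single-attribute scores: worst level = 0, best level = 100.
scores = {("taste", "Quite good"): 68.6, ("taste", "Delicious"): 100.0}

def utility(attr, level):
    # Weight x score reproduces the mean part-worth utility.
    return weights[attr] * scores[(attr, level)]

round(utility("taste", "Quite good"), 1)  # reproduces the mean utility 11.6
```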

Attribute rankings

Consistent with the utilities data above, here are the rankings of the attributes for each of the five participants.

Attribute rankings table

Radar chart

The data in the first table above can be visualized in several ways, including using a ‘radar’ chart.

This chart indicates the strength of each of the five participants’ preferences for the attributes; each participant has a differently colored ‘web’, and the further a point is from the centre of the chart, the more important the attribute. The thick black line shows the mean values.

Radar chart

Attribute relative importance

These ratios – sometimes known as ‘marginal rates of substitution’ (MRS) – capture the importance of the column attribute relative to the row attribute (based on the mean utilities).

Table of relative importance of attributes

Rankings of entered alternatives

Although utilities (as above) are interesting, there is also enormous power in applying each individual’s preferences to new product alternatives and also to competitors’ offerings, in order to predict the likely market share or market shift that might occur.

Such analysis is useful for answering questions like, “What would it take to make Product A the market leader (or to, at least, increase its market share)?”

Here are 12 illustrative product alternatives for flavoured milk drinks.

Rankings of alternatives table

Market shares

The five participants’ utilities can easily be applied to the 12 alternatives by calculating a ‘total utility’ score for each alternative – simply summing the values of the alternative’s level on each attribute – and the alternatives are then ranked for each participant by their total scores. (The linearity of the equation means that, by construction, interaction effects between the attributes are ruled out – i.e. the attributes are treated as independent.)

Thus, it can be seen below that 60% of participants in the survey (3 out of 5 participants) would have chosen (i.e. ‘bought’) Product C and 40% (2 out of 5) would have chosen Product G.

By contrast, just 20% of participants (1 participant) would rank Product A as their 3rd most-preferred product (and probably not buy it).

Of course, just five participants is insufficient to represent the market for flavoured milk drinks! More realistically, 500 – or 1000! – survey participants would be necessary, but hopefully you get the idea of how this analysis works. Note that how a sample is selected – e.g. randomly – is more important than just sample size.

Market shares table
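The market-share calculation described above can be sketched in Python with hypothetical part-worth utilities for two participants and two products:

```python
from collections import Counter

# Hypothetical individual part-worth utilities (participant -> attribute
# -> level -> utility) and two illustrative products.
part_worths = {
    "X": {"taste": {"good": 10, "great": 20}, "price": {"$4": 5, "$3": 20}},
    "Y": {"taste": {"good": 5, "great": 25}, "price": {"$4": 10, "$3": 20}},
}
products = {
    "A": {"taste": "good", "price": "$3"},
    "B": {"taste": "great", "price": "$4"},
}

def total(participant, product):
    # Total utility: sum the participant's value for each level.
    pw = part_worths[participant]
    return sum(pw[attr][level] for attr, level in products[product].items())

# Each participant 'buys' their top-ranked product; first-choice counts
# divided by the number of participants give simulated market shares.
first_choice = {p: max(products, key=lambda prod: total(p, prod))
                for p in part_worths}
shares = {prod: n / len(part_worths)
          for prod, n in Counter(first_choice.values()).items()}
```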

Also, look at the table below to see the frequencies of ranks for each of the 12 alternatives – where we can see that 3 of the 5 participants would rank Product A 4th and 1 participant each would rank it 3rd and 6th respectively.

Rank frequency table

The number in each cell is the number of participants – out of 5 in the survey – who would give the identified alternative the identified rank.
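Counting rank frequencies of this kind is straightforward; here is a Python sketch with hypothetical rankings for three participants:

```python
from collections import Counter

# Each participant's ranking of three products, best first (toy data).
rankings = {
    "X": ["C", "G", "A"],
    "Y": ["G", "C", "A"],
    "Z": ["C", "A", "G"],
}

# For Product A, count how many participants give it each rank
# (index + 1 converts a 0-based list position into a 1-based rank).
rank_freq = Counter(order.index("A") + 1 for order in rankings.values())
# rank_freq[3] = number of participants ranking Product A 3rd
```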

Market simulations (“What ifs?”)

If our objective is to answer a question like, “What would it take to make Product A the market leader?”, we can make predictions, based on the utilities from the survey, as to what would happen if Product A’s attributes were changed. (As mentioned before, bear in mind that just five participants is insufficient to properly simulate a market.)

Below is a comparison of the total utilities for Product A versus Product C (the current market leader) disaggregated across the five attributes (see below for color coding).

Attribute shares table

Concepts by attribute figure

Clearly, relative to Product C, Product A is deficient with respect to its brand image and it is more expensive (on the other hand, A is superior with respect to shelf life).

Other attribute fine-tunings are possible too; e.g. if lowering Product A’s price were infeasible, then improving its brand image and its nutrition would be sufficient to overtake Product C. This can be discerned from these Tornado Charts: −/+ 1 level (one-way sensitivity analysis):

Tornado chart sensitivity analysis
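A one-way (−/+ 1 level) sensitivity analysis of this kind can be sketched in Python, using the mean utilities for price and brand image from this example (Product A’s current levels here are illustrative assumptions):

```python
# For each attribute of 'Product A', move one level down or up and
# record the change in total utility - these changes are the bars of a
# tornado chart. Levels are ordered worst to best.
levels = {
    "price":       [("$6", 0.0), ("$5", 4.9), ("$4", 14.3), ("$3", 20.7)],
    "brand image": [("Dull", 0.0), ("OK", 13.4), ("Cool", 28.0)],
}
product_a = {"price": "$4", "brand image": "OK"}  # assumed current levels

def swing(attr):
    names = [name for name, _ in levels[attr]]
    values = dict(levels[attr])
    i = names.index(product_a[attr])
    base = values[names[i]]
    down = values[names[max(i - 1, 0)]] - base              # one level worse
    up = values[names[min(i + 1, len(names) - 1)]] - base   # one level better
    return round(down, 1), round(up, 1)

swings = {attr: swing(attr) for attr in product_a}
```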

As product alternatives are refined – e.g. improving Product A’s brand image and nutrition (as above) – we can see the impact this may have on the market. In this case Product A could be expected to take a 70% market share (based on these 12 alternatives and the five participants’ preferences), at the expense of the market shares of Products C and G.

Concept rankings table

Scatter chart

Rankings of all possible alternatives

In addition to rankings of particular entered alternatives (e.g. 12, as above), it’s possible to see rankings of all theoretical combinations of the attributes – in this example, 3 × 3 × 4 × 3 × 3 = 324 alternatives; here are the first 20:

All possible concepts table
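Enumerating and ranking all 324 combinations takes only a few lines of Python, using the mean utilities from this example (level names abbreviated):

```python
from itertools import product

# Mean part-worth utilities for each attribute's levels (3 x 3 x 4 x 3
# x 3 = 324 combinations in total).
attrs = {
    "taste":     {"nothing special": 0.0, "quite good": 11.6, "delicious": 16.9},
    "nutrition": {"fattening": 0.0, "non-fattening": 6.5, "nutritious": 19.1},
    "price":     {"$6": 0.0, "$5": 4.9, "$4": 14.3, "$3": 20.7},
    "shelf":     {"short": 0.0, "medium": 7.0, "long": 15.3},
    "brand":     {"dull": 0.0, "ok": 13.4, "cool": 28.0},
}

# Build every theoretical combination, then rank by total mean utility.
combos = [dict(zip(attrs, combo)) for combo in product(*attrs.values())]
ranked = sorted(combos,
                key=lambda c: sum(attrs[a][lvl] for a, lvl in c.items()),
                reverse=True)
# ranked[0] is the best possible alternative (total utility 100)
```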

Further analysis

Based on the outputs above, the following types of analysis are easily performed using Excel or, for the cluster analysis, a statistics package (e.g. SPSS, Stata, MATLAB).

Willingness-to-pay (WTP)

The usual way of calculating WTP is to work out the number of currency units (e.g. dollars) that each utility unit – often referred to as a ‘utile’ – is worth. It is then easy to convert all the non-monetary attributes – valued in terms of utiles (utilities) – into monetary equivalents, which can be interpreted as WTP.

Thus, for example, using the mean utilities (as reproduced below), a price fall from $6 to $3 (i.e. a fall of $3) corresponds to a utility gain of 20.7 − 0.0 = 20.7 utiles. Therefore, 1 utile is worth $3/20.7 = 14.5 cents. Applying this ‘price’ of 14.5 cents per utile allows us to convert the utilities associated with the non-monetary attributes into WTPs.

                                                       Mean    WTP
Taste
  Nothing special                                      0%      –
  Quite good                                           11.6%   $1.68
  Delicious                                            16.9%   $2.44
Nutrition
  Fattening                                            0%      –
  Non-fattening, but not nutritious                    6.5%    $0.94
  Non-fattening, and nutritious (e.g. calcium rich)    19.1%   $2.76
Price (per 500 ml bottle)
  $6                                                   0%      –
  $5                                                   4.9%    –
  $4                                                   14.3%   –
  $3                                                   20.7%   –
Shelf Life
  Short shelf life                                     0%      –
  Medium shelf life                                    7.0%    $1.02
  Long shelf life                                      15.3%   $2.22
Brand image
  Dull (a bit embarrassing)                            0%      –
  OK (but not cool)                                    13.4%   $1.94
  Cool                                                 28.0%   $4.06
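The conversion from utiles to WTP can be sketched in Python:

```python
# One utile's dollar value comes from the price attribute: a $3 price
# fall yields 20.7 utiles, so each utile is worth $3 / 20.7, or about
# 14.5 cents. Any non-monetary utility can then be converted to WTP.
dollars_per_utile = 3 / 20.7

def wtp(utility):
    # Willingness-to-pay in dollars for a given utility gain.
    return round(utility * dollars_per_utile, 2)

wtp(11.6)  # 'Quite good' taste  -> 1.68 dollars
wtp(28.0)  # 'Cool' brand image  -> 4.06 dollars
```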

Cluster (market segmentation) analysis

As mentioned earlier, a major strength of 1000minds is that utilities are generated for each individual decision-maker, in contrast to other methods that only produce aggregate data from the group of decision-makers.

Individual-level data enables cluster analysis to be performed (i.e. after exporting to Excel and then using a statistics package) in order to identify ‘clusters’ – or ‘market segments’ – of participants with similar preferences (as represented by their utilities).

The schematic below illustrates the main idea behind the ‘k-means clustering method’, which may be briefly explained as follows.

  • Imagine there are just 2 attributes, as represented by the x and y axes in the panels below, and each point in the space corresponds to a participant’s utilities (on x and y attributes).
  • The k-means clustering algorithm starts by asking the analyst to set the number of clusters, k (the ‘means’ are the cluster centres); in the schematic there are 3 (i.e. k = 3).
  • A starting point (x,y co-ordinates) is randomly chosen for each of the 3 yet-to-be-discovered clusters; see Panel A.
  • Next, every individual in the space is assigned to whichever of the 3 starting points they are closest to; see Panel B.
  • Then a new representative centre – i.e. the mean of its members’ co-ordinates – is calculated for each of the nascent clusters; see Panel C.
  • And the process repeats: every individual is re-assigned to whichever of the 3 new centres they are closest to, and so on until no further changes occur; see Panel D.

Cluster analysis example charts
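The four panels can be sketched in Python; for reproducibility this toy version takes fixed starting centres rather than random ones (Panel A), and the participant points are hypothetical:

```python
from statistics import mean

# Minimal k-means sketch in 2-D (two attributes): assign each point to
# its nearest centre, recompute centres as cluster means, and repeat.
def kmeans(points, centres, iterations=20):
    for _ in range(iterations):
        clusters = [[] for _ in centres]
        for p in points:  # Panel B: assign each point to nearest centre
            nearest = min(range(len(centres)),
                          key=lambda i: (p[0] - centres[i][0]) ** 2
                                        + (p[1] - centres[i][1]) ** 2)
            clusters[nearest].append(p)
        centres = [(mean(x for x, _ in c), mean(y for _, y in c))
                   if c else centres[i]
                   for i, c in enumerate(clusters)]  # Panel C: new means
    return clusters  # Panel D: repeated until assignments stop changing

# Six hypothetical participants' (x, y) utilities, in three groups.
points = [(1, 1), (1, 2), (8, 8), (9, 8), (20, 1), (21, 2)]
clusters = kmeans(points, centres=[(0, 0), (10, 10), (25, 0)])
```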

Finally, having identified clusters of utilities (e.g. 3 clusters, as above), the usual next step is to test the extent to which each cluster is associated with observable socio-demographic characteristics (e.g. age, gender, etc) or other consumer behaviors, in order to define targetable market segments.

Finding survey participants

Depending on your application, here are some ways you can get your conjoint analysis survey in front of the right people.

If you know the people already

If you are surveying a group of people you already know, e.g. your customers or colleagues, you can load their email addresses into 1000minds to send from there. Or you can create a link (URL) for the survey within 1000minds and send the link from your own email system. The latter is recommended as you are likely to get more engagement from an email sent by someone trusted – you!

If you know where they hang out

If you know what your audience looks at, e.g. a Facebook page, Reddit, your customer portal, a train station or a milk carton, you can share a link to your survey there.

Snowball sampling

If your survey is interesting enough, you might be able to share your survey with a small group of people, and at the end of the survey ask them to share it on social media, by email etc, essentially creating a snowball effect (read the Wikipedia article).


Advertising

You can use Facebook advertising, Google AdWords, etc to create targeted advertisements for your survey. This can be useful if your survey is interesting enough, or if your advertisement offers a reward.

Survey panels

You can purchase a sample of participants from a survey panel provider or market research company. This is particularly useful if you need a sample to be demographically representative, for example. Such panels often have information about their panelists that lets you target your survey according to demographics or interests.

One of our preferred panel suppliers is Cint. Cint boasts “over 155 million engaged respondents across more than 130 countries”, providing an online dashboard so that you can price, automate and report on the survey process.

Another panel supplier we’ve worked well with is Dynata, whose “global reach includes more than 62 million consumers and business professionals”.

Further reading

Seminal articles about Conjoint Analysis in the marketing literature include:

Try it out for free!

Create a free 1000minds account to get a feel for how 1000minds conjoint analysis can meet your needs, or schedule a free demo with one of our experts today!


References

J Au, A Coleman & T Sullivan (2015), “A practical approach to well-being based policy development: What do New Zealanders want from their retirement income policies?”, New Zealand Treasury Working Papers 15/14.

P Hansen & F Ombler (2008), “A new method for scoring multi-attribute value models using pairwise rankings of alternatives”, Journal of Multi-Criteria Decision Analysis 15, 87-107.

J Louviere, T Flynn & A Marley (2015), Best-Worst Scaling: Theory, Methods and Applications, Cambridge University Press.

D McFadden (1974), “Conditional logit analysis of qualitative choice behavior”, Chapter 4 in: P Zarembka (editor), Frontiers in Econometrics, Academic Press.

K Train (2009), Discrete Choice Methods with Simulation, Cambridge University Press.