1000minds implements a method for performing conjoint analysis that we invented and christened ‘PAPRIKA’.
For reasons that will become clear below, PAPRIKA is an acronym for Potentially All Pairwise RanKings of all possible Alternatives.
The method is explained gently (non-technically) below. In addition, more detailed and technically oriented information is available from these ‘external’ sources:
- For an overview, read the Wikipedia article.
- For technical details, see our journal article: P Hansen & F Ombler (2008), “A new method for scoring multi-attribute value models using pairwise rankings of alternatives”, Journal of Multi-Criteria Decision Analysis 15, 87–107.
The PAPRIKA method involves the participant – e.g. you! – answering a series of simple pairwise comparison questions, based on your expert knowledge and subjective judgment.
Each question is based on choosing between two hypothetical ‘concepts’ (i.e. alternatives) defined on just two attributes at a time and involving a trade-off (in effect, the other attributes are the same). An example of a question – involving designing a car – appears below.
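Such a question can be pictured as a simple data structure. Here is a minimal sketch in Python – the attribute names, levels and labels are invented for illustration (1000minds presents the question graphically):

```python
# A hypothetical PAPRIKA-style question: two 'concepts' (cars) that differ
# on just two attributes and involve a trade-off. All attributes and levels
# below are invented for illustration.
question = {
    "car_A": {"fuel economy": "10 km per litre", "safety rating": "5 stars"},
    "car_B": {"fuel economy": "15 km per litre", "safety rating": "4 stars"},
}
# Any other attributes (price, comfort, ...) are held the same for both
# cars, so the choice hinges purely on the fuel-economy vs safety trade-off.
```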
Each time you answer a question, PAPRIKA adapts. Based on your answer, and all of your preceding answers, PAPRIKA chooses another question for you to answer. Based on that answer, another question is presented; and then another, and another, and so on. This adaptivity means that PAPRIKA is recognized as a type of adaptive conjoint analysis.
The easiest possible choices!
PAPRIKA’s questions are based on ‘partial profiles’ – beginning with just two attributes at a time – in contrast to alternative ‘full-profile’ methods which involve all attributes together at once (e.g. seven or more). The obvious advantage of such simple questions is that they are relatively easy to think about – so you can have more confidence in your answers!
Indeed, if you think about it ... choosing one alternative (‘concept’) from two, defined on just two attributes at a time, is the easiest possible choice in the universe! In contrast, choosing one alternative from three or more is harder; and so is choosing between alternatives defined on more than two attributes.
Such simple questions are repeated with different pairs of hypothetical concepts – always involving trade-offs between different combinations of attributes, two at a time – until enough information about your preferences has been collected to accurately rank the concepts you’re considering (as described on the attributes).
It’s possible to proceed to the ‘next level of decision-making’, involving three attributes at a time, and then four, five, etc – up to the number of attributes you’ve included – but for most practical purposes this makes no difference to the accuracy of your results. Stopping after having answered questions involving just two attributes at a time is usually sufficient.
From your answers, mathematical methods based on linear programming are used to calculate your ‘part-worth utilities’, representing the relative importance (weights) of the attributes to you.
These utilities are used to rank any concepts you’re considering, or, depending on the application, all hypothetically possible alternatives/concepts (all possible combinations of the attributes).
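To give a flavour of the calculation – a simplified sketch only, not 1000minds’ actual implementation – one textbook linear-programming formulation finds non-negative part-worth utilities that reproduce the pairwise rankings while maximizing the smallest ‘margin’ separating each ranked pair. Using SciPy, with an invented three-attribute example:

```python
# A simplified sketch (not 1000minds' actual implementation) of recovering
# part-worth utilities from pairwise rankings via linear programming.
# Toy setup: three attributes, each with a 'worse' level (utility fixed
# at 0) and a 'better' level whose utility we solve for.
from scipy.optimize import linprog

# Decision variables: x = [u_fuel, u_safety, u_price, margin],
# where u_* is the utility of an attribute's better level.

# Suppose the participant gave two answers:
#   1. better fuel + worse safety  preferred to  worse fuel + better safety
#        =>  u_fuel - u_safety >= margin
#   2. better safety + worse price  preferred to  worse safety + better price
#        =>  u_safety - u_price >= margin
# Rewritten in linprog's A_ub @ x <= b_ub form:
A_ub = [
    [-1,  1, 0, 1],   # margin - (u_fuel - u_safety) <= 0
    [ 0, -1, 1, 1],   # margin - (u_safety - u_price) <= 0
]
b_ub = [0, 0]

# Normalization: the three better-level utilities sum to 1.
A_eq = [[1, 1, 1, 0]]
b_eq = [1]

# Maximize the margin (linprog minimizes, so negate it).
c = [0, 0, 0, -1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
u_fuel, u_safety, u_price, margin = res.x
print(f"fuel={u_fuel:.2f}, safety={u_safety:.2f}, price={u_price:.2f}")
# -> fuel=0.67, safety=0.33, price=0.00 for these two answers

# Summing level utilities scores any concept, inducing a complete ranking:
score_A = u_fuel + u_safety   # better fuel, better safety, worse price
score_B = u_safety + u_price  # worse fuel, better safety, better price
print("concept A wins" if score_A > score_B else "concept B wins")
```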
What limits the number of choices, so that PAPRIKA is user-friendly?
Each time you pairwise rank a pair of concepts (‘cars’, in the example above), PAPRIKA immediately identifies all other pairs of hypothetical concepts whose ranking is logically implied by your answers so far, and eliminates them from the questions still to be asked. It does this by applying the logical property known as ‘transitivity’.
For example, if you rank concept X ahead of concept Y and also Y ahead of concept Z, then, logically – by transitivity! – X must be ranked ahead of Z. And so PAPRIKA ranks this third pair implicitly, and any others similarly implied by transitivity, and eliminates them, so that you’re not burdened by being asked any redundant questions pertaining to these implied rankings.
(This is why each time you answer a question, 1000minds pauses for a split second to ‘think’ about which other questions to eliminate, and which one to ask next.)
This elimination procedure ensures that the number of questions you are asked is minimized. And yet, in the process of answering a relatively small number of questions, you end up having pairwise ranked all hypothetical concepts differentiated on two attributes at a time, either explicitly or implicitly (by transitivity). Thus, you reveal enough information about your preferences for PAPRIKA to be able to report your ‘part-worth utilities’ with confidence in their accuracy.
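As a minimal sketch of this bookkeeping (illustrative only – not 1000minds’ actual, far more efficient algorithm), the explicit answers can be stored as (winner, loser) pairs and their transitive closure computed; any pair already in the closure never needs to be asked:

```python
from itertools import product

def transitive_closure(ranked):
    """Given a set of (winner, loser) pairs, add every ranking implied
    by transitivity: X > Y and Y > Z imply X > Z."""
    closure = set(ranked)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

# Two explicit answers about concepts X, Y and Z ...
answered = {("X", "Y"), ("Y", "Z")}
implied = transitive_closure(answered) - answered
print(implied)  # {('X', 'Z')} -- this question never needs to be asked
```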
The number of choices depends on the number of attributes and levels you’ve entered for your application: the more attributes and/or levels, the more choices. For example, with four attributes and three or four levels within each attribute, approximately 30 choices are required.
The remarkable feat achieved by PAPRIKA (as implemented by 1000minds) – hence the patents and awards – is that, in ‘real time’ and very efficiently, it keeps track of the potentially millions of pairwise rankings of the hypothetical concepts implied by decision-makers’ answers (as a result of transitivity).
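A little arithmetic (with illustrative numbers) shows the scale involved:

```python
from math import prod

# One instantiation of the four-attribute example above: levels of 3, 3, 4
# and 4 define this many hypothetically possible concepts ...
concepts = prod([3, 3, 4, 4])            # 144 concepts
pairs = concepts * (concepts - 1) // 2   # 10,296 distinct pairs

# ... and a larger application, say eight attributes with four levels each,
# reaches pair counts in the billions:
big = 4 ** 8                             # 65,536 concepts
big_pairs = big * (big - 1) // 2         # 2,147,450,880 pairs

print(pairs, big_pairs)
# Yet only a few dozen pairs (around 30 in the four-attribute example)
# typically need to be ranked explicitly; transitivity handles the rest.
```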
Thus, the PAPRIKA method manages to identify Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA) representable by the attributes. Because the pairwise rankings are consistent, a complete overall ranking of concepts is defined, based on your preferences.
As well as its scientific validity and user-friendliness, a major strength of the PAPRIKA method when it is implemented in surveys – i.e. when a 1000minds conjoint survey is run – is that part-worth utilities (weights) are generated for each individual participant, in contrast to other methods that produce aggregate data only.
Individual-level data enables, for example, cluster analysis to identify ‘clusters’ of participants with similar patterns of preferences.
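For example, here is a minimal clustering sketch using scikit-learn – the utilities are invented, and any standard clustering algorithm could be substituted:

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: one participant's part-worth utilities (weights) across three
# attributes -- invented numbers for illustration.
utilities = np.array([
    [0.60, 0.30, 0.10],   # cares mostly about attribute 1
    [0.65, 0.25, 0.10],
    [0.15, 0.25, 0.60],   # cares mostly about attribute 3
    [0.10, 0.30, 0.60],
])

# Group participants with similar patterns of preferences.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(utilities)
print(labels)  # e.g. [0 0 1 1] -- two preference 'clusters'
```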
How does PAPRIKA relate to data mining and machine learning?
Both the PAPRIKA method and data mining in its various forms (e.g. machine learning) create ‘predictive algorithms’ – expressed in terms of criteria and preference values (or part-worth utilities) on the criteria – for predicting decision-makers’ choices or other outcomes of interest (depending on the application). Where the two approaches differ, however, is in how they create these algorithms.
As explained earlier, PAPRIKA elicits decision-makers’ expert knowledge and subjective judgments by asking them simple questions involving trade-offs between the criteria for the decision at hand. From their answers, PAPRIKA determines their preference values, representing the criteria’s relative importance, which are used in the resulting algorithm to predict decision-makers’ choices. Thus, decision-makers can be thought of as using PAPRIKA to ‘train’ the algorithm (criteria and preference values).
In contrast, data mining and machine learning involve identifying patterns in large data sets. Sophisticated computational and statistical techniques are used to quantify relationships between the outcomes of interest and observed variables (e.g. ‘criteria’) in the data set. (These techniques and the resulting algorithm are often incomprehensible ‘black boxes’ – even to their creators!)
Data mining and machine learning are only as valid and reliable as the data they are based on. PAPRIKA is obviously superior when no data are available for mining or learning from – PAPRIKA can be thought of as ‘mining’ the necessary preference data out of decision-makers themselves. And if the data for data mining/machine learning are in some way biased (e.g. incomplete or prejudiced), this bias might be reproduced, and potentially amplified, in the algorithms created.
Because decision-makers’ answers to PAPRIKA’s trade-off questions are recorded in 1000minds, this information – upon which the resulting algorithm (criteria and preference values) is based – is fully transparent, auditable and potentially revisable.