1000minds’ ‘secret spice’ is a method we invented and christened ‘PAPRIKA’.
For reasons that will become clear below, PAPRIKA is an acronym for Potentially All Pairwise RanKings of all possible Alternatives.
The method is gently (non-technically) explained below. In addition, more detailed and technically-oriented information is available from these ‘external’ sources:
- For an overview, read the Wikipedia article.
- For technical details, see our journal article: P Hansen & F Ombler (2008), “A new method for scoring multi-attribute value models using pairwise rankings of alternatives”, Journal of Multi-Criteria Decision Analysis 15, 87–107.
The PAPRIKA method involves the decision-maker – e.g. you! – answering a series of simple questions, based on your expert knowledge and subjective judgment. Each question is based on choosing between two hypothetical alternatives defined on just two criteria or attributes at a time and involving a tradeoff (in effect, the other criteria/attributes are the same).
An example of a question in the context of Multi-Criteria Decision-Making (MCDM, also known as Multi-Criteria Decision Analysis, MCDA) appears below – involving choosing between ‘projects’ (a generic example most people can probably relate to). An example and explanation of PAPRIKA in the context of Conjoint Analysis is also available.
Each time you answer a pairwise-ranking question, PAPRIKA adapts. Based on your answer, and all preceding ones, PAPRIKA chooses another question for you to answer. Again based on your answer, another question is presented. And then another, and another; and so on. This adaptivity means that in some contexts PAPRIKA is recognized as a type of adaptive conjoint analysis.
The easiest possible questions for eliciting preferences
PAPRIKA’s questions are based on ‘partial profiles’ – beginning with just two criteria at a time – in contrast to alternative ‘full-profile’ methods which involve all criteria together at once (e.g. seven or more). The obvious advantage of such simple questions is that they are relatively easy to think about – so you can have more confidence in your answers!
Indeed, if you think about it ... choosing one alternative from two, defined on just two criteria at a time, is the easiest possible decision in the universe! In contrast, choosing one alternative from three or more is harder; and so is choosing between alternatives defined on more than two criteria.
Such simple questions are repeated with different pairs of hypothetical alternatives – always involving trade-offs between different combinations of the criteria, two at a time – until enough information about your preferences has been collected to accurately rank the alternatives you’re considering (as described on the criteria).
It’s possible to proceed to the ‘next level of decision-making’, involving three criteria at a time, and then four, five, etc – up to the number of criteria you’ve included – but for most practical purposes this makes no difference to the accuracy of your results. Stopping after having answered questions involving just two criteria at a time is usually sufficient.
From your answers, mathematical methods based on linear programming are used to calculate your preference values (or ‘part-worth utilities’), representing the relative importance (weights) of the criteria, and of the levels for the criteria, to you.
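To make the idea concrete, here is a deliberately simplified sketch. 1000minds uses linear programming; the brute-force search below merely illustrates the same principle – finding point values that are consistent with a decision-maker’s answers. The two criteria, their levels, and the single recorded answer are all invented for illustration.

```python
from itertools import product

# Hypothetical setup: two criteria, each with levels 'low' and 'high'.
# The lowest level of each criterion is anchored at zero points.
# Recorded answer: the decision-maker preferred
#   (urgency = high, effectiveness = low)  over
#   (urgency = low,  effectiveness = high),
# which, as a linear inequality on point values, reads u_hi + 0 > 0 + e_hi.

solutions = []
for u_hi, e_hi in product(range(1, 6), repeat=2):  # small integer search space
    if u_hi > e_hi:  # the answer above, expressed as an inequality
        solutions.append((u_hi, e_hi))

# Every feasible assignment weights 'urgency: high' above 'effectiveness: high',
# exactly what the recorded answer implies.
print(solutions[0])  # first feasible assignment: (2, 1)
```

A linear-programming solver does the same job far more efficiently and over continuous values, but the essence is identical: each answered question becomes a linear inequality, and the preference values are a solution satisfying all of them.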
These criteria and their weights serve to codify your preferences with respect to the decision at hand. Automatically implemented in 1000minds, they can be used to rank any alternatives you’re considering, or, depending on the application, all hypothetically possible alternatives (all possible combinations of the criteria).
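Once the part-worth utilities are known, ranking alternatives is a simple additive scoring exercise. The sketch below assumes invented criteria, levels and point values – they are not outputs of any real 1000minds model – but the mechanics are the standard additive value model.

```python
# Hypothetical part-worth utilities (points) for three invented criteria.
part_worths = {
    "urgency":       {"low": 0, "medium": 10, "high": 25},
    "effectiveness": {"low": 0, "medium": 15, "high": 40},
    "cost":          {"high": 0, "medium": 20, "low": 35},  # lower cost is better
}

# Invented alternatives, each described on the criteria.
projects = {
    "Project A": {"urgency": "high",   "effectiveness": "medium", "cost": "low"},
    "Project B": {"urgency": "medium", "effectiveness": "high",   "cost": "medium"},
    "Project C": {"urgency": "low",    "effectiveness": "low",    "cost": "high"},
}

def total_score(ratings):
    """Sum the point values for an alternative's level on each criterion."""
    return sum(part_worths[criterion][level] for criterion, level in ratings.items())

# Rank alternatives by total score, best first.
ranking = sorted(projects, key=lambda p: total_score(projects[p]), reverse=True)
print(ranking)  # ['Project A', 'Project B', 'Project C']
```

The same weights can score every combination of levels, which is how “all hypothetically possible alternatives” can be ranked, not just the ones entered.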
What limits the number of questions, so that PAPRIKA is user-friendly?
Each time you pairwise rank a pair of alternatives (‘projects’, in the example above), PAPRIKA immediately identifies all other pairs of hypothetical alternatives whose relative ranking is already implied by your answers, and eliminates the corresponding questions. It does this by applying a logical property known as ‘transitivity’.
For example, if you rank alternative X ahead of alternative Y and also Y ahead of alternative Z, then, logically – by transitivity! – X must be ranked ahead of Z. And so PAPRIKA ranks this third pair implicitly, and any others similarly implied by transitivity, and eliminates them, so that you’re not burdened by being asked any redundant questions pertaining to these implied rankings.
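The X, Y, Z example above is just transitive closure. The sketch below is a minimal illustration of the idea – it is not 1000minds’ actual implementation, which must do this very efficiently at scale – showing how explicit answers imply further rankings that never need to be asked about.

```python
def transitive_closure(answered):
    """Given a set of (winner, loser) pairs from explicitly answered
    questions, return every pairwise ranking they imply by transitivity."""
    implied = set(answered)
    changed = True
    while changed:
        changed = False
        # If A beats B and B beats C, then A beats C.
        for a, b in list(implied):
            for c, d in list(implied):
                if b == c and (a, d) not in implied:
                    implied.add((a, d))
                    changed = True
    return implied

# Two explicit answers: X beats Y, and Y beats Z.
answers = {("X", "Y"), ("Y", "Z")}
implied = transitive_closure(answers)
# implied now also contains ("X", "Z") - that question is redundant.
```

In other words, every ranking in the closure beyond the explicit answers corresponds to a question the decision-maker is spared from answering.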
(This is why each time you answer a question, 1000minds pauses for a split second to ‘think’ about which other questions to eliminate, and which one to ask next.)
This elimination procedure ensures that the number of questions you are asked is minimized. And yet in the process of answering a relatively small number of questions you end up having pairwise ranked all hypothetical alternatives differentiated on two criteria at a time, either explicitly or implicitly (by transitivity). Thus, you reveal enough information about your preferences for PAPRIKA to be able to report your ‘part-worth utilities’ with confidence in their accuracy.
The number of questions to answer depends on the number of criteria and levels you’ve entered for your application: the more criteria and/or levels, the more decisions. For example, with four criteria and three or four levels within each criterion, approximately 30 decisions are required.
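To see why more criteria and levels mean more decisions, consider counting the candidate questions – this counting scheme is an assumption for illustration, not 1000minds’ exact algorithm. A candidate question is a pair of partial profiles on two criteria where neither alternative dominates the other (i.e. there is a genuine trade-off):

```python
from itertools import combinations

def tradeoff_pairs(levels_i, levels_j):
    """Count pairs of two-criterion partial profiles involving a trade-off:
    one alternative is better on the first criterion, worse on the second."""
    profiles = [(a, b) for a in range(levels_i) for b in range(levels_j)]
    count = 0
    for (a1, b1), (a2, b2) in combinations(profiles, 2):
        if (a1 > a2 and b1 < b2) or (a1 < a2 and b1 > b2):
            count += 1
    return count

# With 4 criteria of 3 levels each there are 6 pairs of criteria,
# each contributing 9 trade-off pairs: 54 candidate questions in all.
print(6 * tradeoff_pairs(3, 3))  # 54
```

Transitivity then prunes most of these candidates, which is consistent with the roughly 30 explicit answers mentioned above for a setup of this size.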
The remarkable feat achieved by PAPRIKA (implemented by 1000minds) – hence the patents and awards – is that in ‘real time’ and very efficiently, it keeps track of all of the potentially millions of pairwise rankings of the hypothetical alternatives implied by decision-makers’ answers (as a result of transitivity).
Thus, the PAPRIKA method manages to identify Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA) representable by the criteria. Because the pairwise rankings are consistent, a complete overall ranking of alternatives is defined, based on your preferences.
How does PAPRIKA compare with other methods, such as AHP?
From a user’s perspective, PAPRIKA’s defining feature is that it is based on ‘pairwise rankings’ of alternatives – e.g. see the screenshot above again.
Pairwise ranking – i.e. choosing one alternative from two – is a natural type of decision activity that everyone has experience of in their daily lives. Therefore, as well as being easier to make, such decisions are likely to be more accurate (valid and reliable) than the judgments required by other methods.
In contrast to PAPRIKA, most other methods used by other decision-making software rely on ‘scaling’ or ‘ratio’ measurements of decision-makers’ preferences, which are far less natural – less human!
For example, the well-known Analytic Hierarchy Process (AHP) – e.g. used by Decision Lens and Expert Choice – asks:
“On a nine-point scale ranging from ‘equally preferred’ (ratio = 1) to ‘extreme importance’ (ratio = 9), how much more important to you is criterion A than criterion B?” – e.g. as in the example above: “How much more important is ‘urgency / importance ...’ than ‘likely effectiveness ...’ for prioritizing projects?”
Most people would agree that PAPRIKA’s questions are more natural than AHP’s – because, as mentioned earlier, picking one alternative from just two possibilities is the easiest choice in the universe!
Obviously too, PAPRIKA involves a simple ordinal – ranking – measurement of decision-makers’ preferences rather than a (more cognitively challenging) scaling or ratio measurement, as in the nine-point AHP question quoted above.
Thus, in summary, you can have greater confidence in your answers to the questions posed by the PAPRIKA method and ultimately in decisions reached with the help of 1000minds.
Another major strength of the PAPRIKA method when it is implemented in surveys – i.e. a 1000minds preferences survey is run – is that preference values (weights) are generated for each individual participant, in contrast to other methods that produce aggregate data only.
Individual-level data enables, for example, cluster analysis to be performed to identify ‘clusters’ of participants with similar patterns of preferences.
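As a concrete illustration, each participant’s output is a vector of criterion weights, and any standard clustering technique can group similar vectors. The participants, weight vectors and use of k-means below are all invented assumptions for the sake of the sketch:

```python
import math

# Invented example: each participant's individual criterion weights
# (summing to 1) as produced by a preferences survey.
participants = {
    "P1": [0.60, 0.30, 0.10],
    "P2": [0.55, 0.35, 0.10],
    "P3": [0.10, 0.20, 0.70],
    "P4": [0.15, 0.15, 0.70],
}

def mean(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def kmeans(points, seed_names, iters=10):
    """Tiny k-means: group weight vectors around centroids seeded
    from the named participants."""
    centroids = [points[s] for s in seed_names]
    clusters = {}
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centroids))}
        for name, vec in points.items():
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(vec, centroids[i]))
            clusters[nearest].append(name)
        centroids = [mean([points[n] for n in members]) if members else centroids[i]
                     for i, members in clusters.items()]
    return clusters

print(kmeans(participants, ["P1", "P3"]))
# {0: ['P1', 'P2'], 1: ['P3', 'P4']} - two clear preference clusters
```

Here P1 and P2 (who weight the first criterion heavily) form one cluster, and P3 and P4 (who weight the third heavily) form another – the kind of pattern aggregate-only data would hide.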
How does PAPRIKA relate to data mining and machine learning?
Both the PAPRIKA method and data mining in its various forms (e.g. machine learning) create ‘predictive algorithms’ in terms of criteria and preference values (or part-worth utilities) on the criteria for predicting decision-makers’ choices or other outcomes of interest (depending on the application). Where the two approaches differ, however, is in their methods for creating algorithms.
As explained earlier, PAPRIKA elicits decision-makers’ expert knowledge and subjective judgments by asking them simple questions involving trade-offs between the criteria for the decision at hand. From their answers, PAPRIKA determines their preference values, representing the criteria’s relative importance, which are used in the resulting algorithm to predict decision-makers’ choices. Thus, decision-makers can be thought of as using PAPRIKA to ‘train’ the algorithm (criteria and preference values).
In contrast, data mining and machine learning involve identifying patterns in large data sets. Sophisticated computational and statistical techniques are used to quantify relationships between the outcomes of interest and observed variables (e.g. ‘criteria’) in the data set. (These techniques and the resulting algorithm are often incomprehensible ‘black boxes’ – even to their creators!)
Data mining and machine learning are only as valid and reliable as the data they are based on. PAPRIKA is obviously superior when no data are available for mining or learning from – PAPRIKA can be thought of as ‘mining’ the necessary preference data out of decision-makers themselves. And if the data for data mining/machine learning are in some way biased (e.g. incomplete or prejudiced), this bias might be reproduced, and potentially amplified, in the algorithms created.
Because decision-makers’ answers to PAPRIKA’s trade-off questions are recorded in 1000minds, this information – upon which the resulting algorithm (criteria and preference values) is based – is fully transparent, auditable and potentially revisable.