1000minds’ ‘secret spice’ is a method we invented and christened ‘PAPRIKA’.

For reasons that will become clear below, PAPRIKA is an acronym for *Potentially All Pairwise RanKings of all possible Alternatives*.

The PAPRIKA method (and 1000minds) is patented in three countries – US, New Zealand and Australia – and has been recognized in eight innovation/impact awards.

Information about PAPRIKA is available from these two ‘external’ sources:

- For an overview, read the Wikipedia article.
- For technical details, see our journal article: P Hansen & F Ombler (2008), “A new method for scoring multi-attribute value models using pairwise rankings of alternatives”, *Journal of Multi-Criteria Decision Analysis* 15, 87–107.

The essence of PAPRIKA may be gently (non-technically) explained as follows.

## PAPRIKA

The PAPRIKA method involves the decision-maker – e.g. you! – answering a series of simple questions, based on your expert knowledge and subjective judgment. Each question is based on choosing between two hypothetical alternatives defined on just two criteria or attributes at a time and involving a tradeoff (in effect, the other criteria/attributes are the same).

An example of a question in the context of Multi-Criteria Decision-Making (MCDM, also known as Multi-Criteria Decision Analysis, MCDA) appears below – involving choosing between ‘projects’ (a generic example most people can probably relate to). An example and explanation of PAPRIKA in the context of Conjoint Analysis is also available.

Each time you answer a pairwise-ranking question, PAPRIKA *adapts*. Based on your answer, and all preceding ones, PAPRIKA chooses another question for you to answer. Again based on your answer, another question is presented. And then another, and another; and so on. This adaptivity means that in some contexts PAPRIKA is recognized as a type of *adaptive* conjoint analysis.

### Easy questions for eliciting preferences

PAPRIKA’s questions can be thought of as being based on ‘*partial* profiles’ – beginning with just two criteria at a time – in contrast to alternative ‘*full*-profile’ methods which involve all criteria together at once (e.g. seven or more). The obvious advantage of such simple questions is that they are relatively easy to think about – so you can have more confidence in your answers!

Indeed, if you think about it ... choosing one alternative from two, defined on just two criteria at a time, is the easiest possible decision in the universe! In contrast, choosing one alternative from three or more is harder; and so is choosing between alternatives defined on more than two criteria.

Such simple questions are repeated with different pairs of hypothetical alternatives – always involving trade-offs between different combinations of criteria, two at a time – until enough information about your preferences has been collected to rank accurately the alternatives you’re considering (as described on the criteria).

It’s possible to proceed to the ‘next level of decision-making’, involving three criteria at a time, and then four, five, etc. – up to the number of criteria you’ve included – but for most practical purposes this makes no difference to the accuracy of your results. Stopping after having answered questions involving just two criteria at a time is usually sufficient.

From your answers, mathematical methods based on linear programming are used to calculate your preference values (or ‘part-worth utilities’), representing the relative importance (weights) of the criteria to you.

These preference values are used to rank any alternatives you’re considering, or, depending on the application, all hypothetically possible alternatives (all possible combinations of the criteria).
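Once the preference values are known, ranking is a simple additive exercise: each alternative’s total score is the sum of the preference values of the levels it has on each criterion, and alternatives are sorted by total score. Here is a minimal sketch in Python – the criteria, levels and preference values are made up for illustration, not actual 1000minds output:

```python
# Hypothetical preference values (part-worth utilities) for two criteria,
# each with three levels. All names and numbers are illustrative only.
preference_values = {
    "urgency": {"low": 0, "medium": 10, "high": 25},
    "effectiveness": {"low": 0, "medium": 15, "high": 35},
}

def total_score(alternative):
    """Sum the preference values of the levels the alternative has on each criterion."""
    return sum(preference_values[criterion][level]
               for criterion, level in alternative.items())

# Three hypothetical 'projects', each described by its level on each criterion.
projects = {
    "Project X": {"urgency": "high", "effectiveness": "medium"},
    "Project Y": {"urgency": "medium", "effectiveness": "high"},
    "Project Z": {"urgency": "low", "effectiveness": "low"},
}

# Rank projects from highest to lowest total score.
ranking = sorted(projects, key=lambda p: total_score(projects[p]), reverse=True)
print(ranking)  # → ['Project Y', 'Project X', 'Project Z']
```

This is the standard ‘additive value model’ that such preference values feed into; the hard part (which the sketch takes as given) is deriving the values themselves from the pairwise rankings.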

### What limits the number of questions, so that PAPRIKA is user-friendly?

Each time you pairwise rank a pair of alternatives (‘projects’, in the example above), PAPRIKA immediately identifies all other pairs of hypothetical alternatives whose relative ranking is logically implied by your answers so far, and eliminates them from the pool of potential questions. It does this by applying a logical property known as ‘transitivity’.

For example, if you rank alternative *X* ahead of alternative *Y* and also *Y* ahead of alternative *Z*, then, logically – by transitivity! – *X* must be ranked ahead of *Z*. And so PAPRIKA ranks this third pair implicitly, and any others similarly implied by transitivity, and eliminates them, so that you’re not burdened by being asked any redundant questions pertaining to these implied rankings.

This elimination procedure ensures that the number of questions you are asked is minimized. And yet in the process of answering a relatively small number of questions you end up having pairwise ranked all hypothetical alternatives differentiated on two criteria at a time, either explicitly or implicitly (by transitivity).
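This elimination step can be sketched as computing the transitive closure of the explicit rankings: every ranking in the closure that wasn’t answered explicitly is a question the decision-maker is spared. A toy illustration in Python (the alternatives and answers are hypothetical, and this brute-force closure is only a sketch of the idea, not 1000minds’ actual algorithm):

```python
from itertools import combinations

# Explicit answers the decision-maker has given, as (winner, loser) pairs.
# Hypothetical data: W beats X, X beats Y, Y beats Z.
explicit = {("X", "Y"), ("Y", "Z"), ("W", "X")}
alternatives = {"W", "X", "Y", "Z"}

def transitive_closure(ranked):
    """Add every ranking implied by transitivity: a > b and b > c imply a > c."""
    closure = set(ranked)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

implied = transitive_closure(explicit)

# Pairs still needing an explicit question: those not yet ranked either way.
remaining = [
    (a, b) for a, b in combinations(sorted(alternatives), 2)
    if (a, b) not in implied and (b, a) not in implied
]
print(sorted(implied - explicit))  # rankings the user never has to be asked about
print(remaining)                   # questions still to ask (here: none)
```

With these three answers, the rankings W > Y, W > Z and X > Z all come for free, so all six pairs of the four alternatives end up ranked after only three questions.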

The number of questions to answer depends on the number of criteria and levels you’ve entered for your application: the more criteria and/or levels, the more decisions. For example, with four criteria and three or four levels within each criterion, approximately 30 decisions are required.

The remarkable feat achieved by PAPRIKA (implemented by 1000minds) – hence the patents and awards – is that in ‘real time’ and very efficiently, it keeps track of all of the potentially millions of pairwise rankings of the hypothetical alternatives implied by decision-makers’ answers (as a result of transitivity).
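To give a sense of that scale, a quick back-of-the-envelope calculation (the numbers are illustrative, not taken from 1000minds): every combination of one level per criterion is a hypothetical alternative, so the counts multiply up very quickly.

```python
# Illustrative combinatorics: how many hypothetical alternatives and pairs
# a given set of criteria defines. The example numbers are hypothetical.
def n_alternatives(levels_per_criterion):
    """One hypothetical alternative per combination of one level per criterion."""
    n = 1
    for levels in levels_per_criterion:
        n *= levels
    return n

def n_pairs(levels_per_criterion):
    """Number of distinct unordered pairs of hypothetical alternatives."""
    n = n_alternatives(levels_per_criterion)
    return n * (n - 1) // 2

print(n_alternatives([4] * 8))  # 8 criteria x 4 levels: 65536 alternatives
print(n_pairs([4] * 8))         # over two billion distinct pairs
```

Even modest numbers of criteria and levels generate a combinatorial explosion of pairs, which is why implicitly ranking almost all of them by transitivity matters so much.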

Thus, the PAPRIKA method manages to identify *Potentially All Pairwise RanKings of all possible Alternatives* (PAPRIKA) representable by the criteria. Because the pairwise rankings are consistent, a complete overall ranking of alternatives is defined, based on your preferences.

### How does PAPRIKA compare with other methods, such as AHP?

From a user’s perspective, PAPRIKA’s defining feature is that it is based on ‘*pairwise rankings*’ of alternatives – e.g. see the screenshot above again.

Pairwise ranking – i.e. choosing one alternative from two – is a natural type of decision activity that everyone has experience of in their daily lives. Therefore, as well as being easier to make, such decisions are likely to be more accurate (valid and reliable) than the judgments elicited by other methods.

In contrast to PAPRIKA, most other methods used by other decision-making software rely on ‘scaling’ or ‘ratio’ measurements of decision-makers’ preferences, which are far less natural – less human!

For example, the well-known Analytic Hierarchy Process (AHP) asks:

“On a scale from 1 to 9, how many times more (or less) important is criterion *A* than criterion *B* – e.g. as in the example above, how much more important is ‘urgency / importance ...’ than ‘likely effectiveness ...’ for prioritizing projects?”

Most people would agree that PAPRIKA’s questions are more natural than AHP’s – because, as mentioned earlier, picking one alternative from just two possibilities is the easiest choice in the universe!

Obviously too, PAPRIKA involves a simple *ordinal* – ranking – measurement of decision-makers’ preferences rather than a (more cognitively challenging) *scaling* or *ratio* measurement – e.g. as in the AHP example above, “On a scale from 1 to 9, how many times more important is criterion *A* than criterion *B*?”

Thus, in summary, you can have greater confidence in your answers to the questions posed by the PAPRIKA method and ultimately in decisions reached with the help of 1000minds.

### But wait, there’s more!

Another major strength of the PAPRIKA method when it is implemented in surveys – i.e. a 1000minds preferences survey is run – is that preference values (weights) are generated for each *individual* participant, in contrast to other methods that produce *aggregate* data only.

Individual-level data enables, for example, cluster analysis to be performed to identify ‘clusters’ of participants with similar patterns of preferences.

### See also

Wikipedia article about the PAPRIKA method