Additive points systems are widely used to combine alternatives’ characteristics on multiple criteria for the purpose of ranking alternatives.

Additive points systems (hereinafter simply ‘points systems’) are also known variously as ‘scoring’, ‘linear’ and ‘point-count’ systems. In the MCDM / MCDA literature they’re sometimes known as ‘additive multiple criteria value models with performance categories’.

Points systems are a common solution to a pervasive problem in a wide variety of applications: how to combine alternatives’ characteristics on multiple criteria (or attributes) to obtain an overall ranking of the alternatives.

A points system is simply a schedule of ‘point values’ (referred to as ‘preference values’ in 1000minds) for each criterion, where each criterion is demarcated into two or more mutually exclusive levels (see the example below).

Within a given points system, the point values for each level represent both the relative importance and the performance on a particular criterion. Thus (and for more details, see interpreting preference values):

- Each criterion’s relative importance, or ‘weight’ – i.e. relative to the other criteria – is represented by the point (or ‘preference’) value of its highest-ranked level.
- A criterion’s point value(s) between the lowest and highest levels represent both the criterion’s relative importance and the levels’ performances relative to the highest level – hence ‘middle’ values are less than the value of the highest level.

The criteria for a points system may be quantitative (*eg*. *experience*, as below) or qualitative in nature (*eg*. the other four criteria below). For criteria that are not naturally categorical, it is usually possible to represent them in terms of levels appropriate to the application (listed within each criterion from lowest ranked to highest ranked).

A points system works as follows: each alternative (or individual) under consideration is graded on the system’s criteria, and the corresponding point values are summed to obtain a total score for each alternative, by which the alternatives are ranked or classified relative to each other.
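This grading-and-summing mechanism can be sketched in a few lines of Python, using the job-applicant example developed below (the dictionary layout and function name are illustrative, not any particular software’s API):

```python
# A minimal sketch of a points system: each criterion maps its levels to
# point values, and an alternative's total score is the sum of the point
# values for its graded levels.

POINTS = {
    "Qualifications": {"poor": 0, "average": 8, "good": 20, "excellent": 40},
    "References":     {"poor": 0, "average": 9, "good": 18},
    "Social Skills":  {"poor": 0, "good": 20},
    "Experience":     {"< 1 year": 0, "1 - 4 years": 2, "> 4 years": 10},
    "Enthusiasm":     {"poor": 0, "good": 12},
}

def total_score(grades):
    """Sum the point values for an alternative's level on each criterion."""
    return sum(POINTS[criterion][level] for criterion, level in grades.items())

peter = {"Qualifications": "good", "References": "average",
         "Social Skills": "poor", "Experience": "> 4 years", "Enthusiasm": "poor"}
fran  = {"Qualifications": "poor", "References": "average",
         "Social Skills": "good", "Experience": "> 4 years", "Enthusiasm": "good"}

print(total_score(peter))  # 39
print(total_score(fran))   # 51
```

Ranking the alternatives is then simply a matter of sorting them by their total scores.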

Clearly, the additive nature of (*additive*) points systems means that, by construction, interaction effects between the criteria are ruled out (*ie*. the criteria are independent).

It should be emphasised in particular that a points system is not intended as a replacement for ‘human’ decision-making. A points system is simply a tool to support decision-makers by increasing the validity and reliability of the decision-making process, and to make such processes more explicit and transparent (*ie*. accountable).

### An example

Here is a simple example of a points system for deciding which job applicants to hire, with five criteria, each with between two and four levels:

| Criterion | Level | Points |
|---|---|---|
| Qualifications | poor | 0 |
|  | average | 8 |
|  | good | 20 |
|  | excellent | 40 |
| References | poor | 0 |
|  | average | 9 |
|  | good | 18 |
| Social Skills | poor | 0 |
|  | good | 20 |
| Experience | < 1 year | 0 |
|  | 1 - 4 years | 2 |
|  | > 4 years | 10 |
| Enthusiasm | poor | 0 |
|  | good | 12 |
| **Total Score =** |  | _____ |

Thus, for the two job applicants, Peter and Fran, mentioned in the Introduction to MCDM / MCDA, the points system scores them as in the table below, where Fran is ranked ahead of Peter in terms of their total scores.

| Criterion | Peter | Fran |
|---|---|---|
| Qualifications | good = 20 | poor = 0 |
| References | average = 9 | average = 9 |
| Social Skills | poor = 0 | good = 20 |
| Experience | > 4 years = 10 | > 4 years = 10 |
| Enthusiasm | poor = 0 | good = 12 |
| **Total Score =** | 39 points (2nd) | 51 points (1st) |

### Normalised weights and single criterion scores

As well as the (unweighted) additive form above, points systems may also be implemented, equivalently, in terms of normalised criterion weights (summing to 1) and single criterion scores (with a common scale, such as from 0 to 100 points).

Thus the points system above can be expressed, equivalently (demonstrated below), as:

| Criterion | Criterion weight | Level | Single criterion score |
|---|---|---|---|
| Qualifications | 0.40 | poor | 0 |
|  |  | average | 20 |
|  |  | good | 50 |
|  |  | excellent | 100 |
| References | 0.18 | poor | 0 |
|  |  | average | 50 |
|  |  | good | 100 |
| Social Skills | 0.20 | poor | 0 |
|  |  | good | 100 |
| Experience | 0.10 | < 1 year | 0 |
|  |  | 1 - 4 years | 20 |
|  |  | > 4 years | 100 |
| Enthusiasm | 0.12 | poor | 0 |
|  |  | good | 100 |
| **Total Score =** |  |  | _____ |

The equivalence of these two representations (unweighted and weighted) can be easily confirmed by calculating Peter’s and Fran’s total scores using both approaches.

**(1) Unweighted Points System**

Peter’s total score = 20 + 9 + 0 + 10 + 0 = 39 points

Fran’s total score = 0 + 9 + 20 + 10 + 12 = 51 points (both as in the 2nd table above)

**(2) ‘Normalised criterion weights and single criterion scores’**

Peter’s total score = (0.40 × 50) + (0.18 × 50) + (0.20 × 0) + (0.10 × 100) + (0.12 × 0) = 39 points

Fran’s total score = (0.40 × 0) + (0.18 × 50) + (0.20 × 100) + (0.10 × 100) + (0.12 × 100) = 51 points

Although they are equivalent, many users consider the unweighted representation to be simpler to use.
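The equivalence can also be checked programmatically. The sketch below (illustrative names, not any particular software’s API) derives the normalised weights and single criterion scores from the raw point values and confirms that both representations produce the same totals:

```python
# Derive the 'normalised criterion weights and single criterion scores'
# representation from the unweighted point values, and check equivalence.

POINTS = {
    "Qualifications": {"poor": 0, "average": 8, "good": 20, "excellent": 40},
    "References":     {"poor": 0, "average": 9, "good": 18},
    "Social Skills":  {"poor": 0, "good": 20},
    "Experience":     {"< 1 year": 0, "1 - 4 years": 2, "> 4 years": 10},
    "Enthusiasm":     {"poor": 0, "good": 12},
}

# A criterion's weight is its highest level's points as a share of the sum of
# all criteria's highest-level points (here 40 + 18 + 20 + 10 + 12 = 100);
# single criterion scores rescale each level to a common 0-100 scale.
top = {c: max(levels.values()) for c, levels in POINTS.items()}
weights = {c: top[c] / sum(top.values()) for c in POINTS}
scores = {c: {lvl: 100 * pts / top[c] for lvl, pts in levels.items()}
          for c, levels in POINTS.items()}

def unweighted(grades):
    return sum(POINTS[c][lvl] for c, lvl in grades.items())

def weighted(grades):
    return sum(weights[c] * scores[c][lvl] for c, lvl in grades.items())

peter = {"Qualifications": "good", "References": "average",
         "Social Skills": "poor", "Experience": "> 4 years", "Enthusiasm": "poor"}
fran  = {"Qualifications": "poor", "References": "average",
         "Social Skills": "good", "Experience": "> 4 years", "Enthusiasm": "good"}

# Both representations give the same totals (39 for Peter, 51 for Fran).
print(unweighted(peter), weighted(peter))
print(unweighted(fran), weighted(fran))
```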

### Simplicity and accuracy

As mentioned earlier, points systems (in either of the representations above) are used in a wide variety of applications. As well as being simple to use, points systems have been found to be more accurate than the unaided ‘expert’ judgements of decision-makers.

This is the near-universal finding of a large number of studies, as surveyed by Hastie & Dawes (2010 – references appear at the end of this page), who concluded (p. 52):

“... the simple linear model [*ie*. points system] is surprisingly successful in many applications. We say ‘surprisingly’, because many judges claim that their mental processes are much more complex than the linear summary equations would suggest – although empirically, the equation does a remarkably good job of ‘capturing’ their judgment habits.”

This is because (p. 60):

“The mind is in many essential respects a linear weighting and adding device. In fact, much of what we know about the neural networks in the physical brain suggests that a natural computation for such a ‘machine’ is weighting and adding, exactly the fundamental processes that are well described by linear equations [*ie*. points systems].”

Similarly, according to Hadorn et al. (2003, p. 49):

“[Points Systems] are, as a rule, more accurate than human predictors. This is not surprising, as it has been known for decades that human cognitive performance is limited with respect to complex multi-variable judgment and prediction tasks (Meehl 1954).”

### Creating a points system

When creating a points system, two things are essential in order for it to be accurate (in terms of helping users make the ‘right’ decisions):

1. The decision criteria included must be relevant to the decision being made.

2. The criteria’s relative importance must be accurately represented by the points system’s point values – so that the decision-maker’s preferred ranking of individuals/alternatives is reproduced by the ranking of total scores.

For example, with respect to (2), are the point values in the points system shown above (in the tables) right? How would you know? Certainly, they will result in a particular ranking of job applicants. Clearly, though, by changing the point values, the ranking will change (likely affecting which applicants are selected – and which are not). Should the top score for *qualifications*, for example, really be 40 points? How about 36, 38, or 32? (Or even 20 or 60?!)
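To illustrate this sensitivity, here is a small Python sketch (the alternative point value of 33 is purely hypothetical) in which raising the value of ‘good’ *qualifications* flips Peter’s and Fran’s ranking:

```python
# Illustrative sketch: a modest change to one point value flips the ranking.

POINTS = {
    "Qualifications": {"poor": 0, "average": 8, "good": 20, "excellent": 40},
    "References":     {"poor": 0, "average": 9, "good": 18},
    "Social Skills":  {"poor": 0, "good": 20},
    "Experience":     {"< 1 year": 0, "1 - 4 years": 2, "> 4 years": 10},
    "Enthusiasm":     {"poor": 0, "good": 12},
}
peter = {"Qualifications": "good", "References": "average",
         "Social Skills": "poor", "Experience": "> 4 years", "Enthusiasm": "poor"}
fran  = {"Qualifications": "poor", "References": "average",
         "Social Skills": "good", "Experience": "> 4 years", "Enthusiasm": "good"}

def total(grades):
    return sum(POINTS[c][lvl] for c, lvl in grades.items())

print(total(peter), total(fran))   # 39 51 -> Fran ranked first

# Hypothetically raise 'good' qualifications from 20 to 33 points...
POINTS["Qualifications"]["good"] = 33
print(total(peter), total(fran))   # 52 51 -> the ranking flips
```

Hence the importance of ‘scoring’ the points system accurately, as discussed next.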

A variety of methods is available for determining the point values for a points system (sometimes known as ‘scoring’ the points system). The approach used by 1000minds is known as the PAPRIKA method.

### References

DC Hadorn & the Steering Committee of the Western Canada Waiting List Project (2003) “Setting priorities on waiting lists: Point-count systems as linear models”. *Journal of Health Services Research & Policy* 8, 48-54.

R Hastie & RM Dawes (2010) *Rational Choice in an Uncertain World. The Psychology of Judgement & Decision Making*. (2nd edition) Sage Publications.

PE Meehl (1954) *Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.* University of Minnesota Press.