As the name implies, Multi-Criteria Decision-Making (MCDM) is about methods – including software – for making decisions when multiple criteria (or objectives) need to be considered together, in order to rank or choose between the alternatives being evaluated.

This article explains the main ideas behind MCDM in general, in non-technical terms and with reference to the academic literature. Intended readers include people who are new to the area or in need of a ‘refresher’.

MCDM / MCDA

As suggested by the breadth of the description above, there are many 1000s of possible applications for Multi-Criteria Decision-Making (MCDM, also known as Multi-Criteria Decision Analysis, MCDA). Most decisions made by individuals, groups and organizations are amenable to MCDM.

Commonplace examples of MCDM applications from business, government and household decision-making include:

  • Short-listing job applicants
  • Selecting new projects or investments
  • Prioritizing patients for access to health care (NZ health system case study)
  • Ranking grant funding or scholarship applications
  • Choosing a new home or car, etc

Common to the above examples is that they all involve alternatives (including people) being ranked, prioritized or chosen based on multiple criteria being considered together. Some applications also include the allocation of budgets or other scarce resources across the alternatives, with the objective of maximizing ‘value for money’.

Of course, considering multiple criteria together in order to rank or choose between alternatives is a natural decision-making approach as old as human history (see famous quotes). However, ‘traditional’ decision-making – and how most people make their everyday decisions – usually involves weighting the criteria and evaluating trade-offs in an intuitive or holistic manner.

By contrast, MCDM (or MCDA), a sub-discipline of operations research with foundations in economics, psychology and mathematics and increasingly supported by specialised software, is concerned with more formally structuring and solving decision problems, usually involving the explicit weighting of criteria and the trade-offs between them.

Overall, MCDM is intended to reduce the incidence and impact of the biases – arising from decision-makers relying on their ‘gut feeling’ – and the group decision-making failures that almost inevitably afflict intuitive decision-making. By making the weights and associated trade-offs between the criteria explicit in a structured way, MCDM results in more transparent and consistent decisions.

MCDM components

In short, MCDM involves these four key components:

  • Alternatives (or individuals) to be ranked or chosen between
  • Criteria by which the alternatives are evaluated and compared
  • Weights representing the relative importance of the criteria
  • Decision-makers (and, potentially, other stakeholders), whose preferences are to be represented

Most MCDM applications include fewer than a dozen criteria, which may be quantitative or qualitative in nature, with 5-7 criteria being typical. With respect to the number of alternatives evaluated, MCDM applications can be differentiated according to whether they are for ‘one-off’ or ‘repeated’ purposes.

One-off applications involve ranking particular alternatives or individuals that are already known to the decision-maker, such as ranking applicants for a job or prioritizing new business projects, etc – i.e. where the alternatives usually number, at most, in their 10s or 100s.

In contrast to one-off applications, repeated MCDM applications involve ranking alternatives (or individuals) in a pool that is continually changing – e.g. involving potentially 1000s of alternatives. For example, in health and education applications, new patients or students (‘alternatives’) may need to be prioritized – e.g. for treatment or scholarships – on an ongoing basis over time. This dynamism means that potentially all hypothetically-possible alternatives must be able to be considered.

Overview of the MCDM process

Most MCDM applications are based, at least implicitly, on the process represented in the diagram below (from Belton & Stewart 2002). The “Model building” and “Challenging thinking” stages, in particular, are often supported by specialised MCDM software, as discussed later below.

The iterativeness of the MCDM process represented in the diagram, with multiple possible feedbacks and cycles, serves to emphasise that MCDM is intended to function as a tool to help people, individually or in groups, to reach a decision – i.e. their decision (made by humans), not the tool’s decision.

As well as more transparent and consistent decision-making, MCDM can also be used to facilitate stakeholder participation and provide a systematic means by which their preferences can be taken into account. MCDM results can also be used to communicate and justify the final decision to stakeholders.

Figure 1.1: The process of MCDA (Belton & Stewart 2002)

A simple example of MCDM model building

Consider the example of an employer who wants to rank applicants for a job. Here, defining the relevant criteria is likely to be relatively straightforward – e.g. most jobs would include these three criteria (and perhaps others too): qualifications, experience and references.

The more challenging issue for the decision-maker (the employer), however, is likely to be determining accurate weights for the criteria, reflecting their relative importance – so that when job applicants are assessed on the criteria, they are accurately ranked from ‘best’ to ‘worst’.

For simplicity, suppose there are just two applicants, Romeo and Juliet, who are rated on the three criteria as in Table 1.

Table 1: Romeo’s and Juliet’s ratings on the criteria

Criterion        Applicant rating
                 Romeo     Juliet
Qualifications   ‘poor’    ‘good’
Experience       ‘good’    ‘poor’
References       ‘good’    ‘good’

Given that Romeo and Juliet are the same with respect to their references (both ‘good’), whether Romeo is ‘better’ or ‘worse’ than Juliet depends on whether qualifications is more important than experience, or vice versa.

A common-sense approach for aggregating each alternative’s (here, job applicant’s) performance in terms of their ratings across the criteria is to create a ‘weighted-sum model’ (also known as a ‘points system’, and other names are mentioned later below).

Such an approach involves:

  1. Weighting the criteria to reflect their relative importance (where the weights sum to 1); and
  2. Scoring each alternative according to its rating on each criterion (usually in the range 0-100).

Each alternative’s overall performance across the criteria is aggregated via a linear (i.e. additive) equation to produce a ‘total score’ (also in the range 0-100), and the alternatives are ranked by their total scores.

An example of criterion weights and scores for ranking job applicants appears in Table 2.

Table 2: A weighted-sum model for ranking job applicants

Criterion        Criterion weight   Level       Criterion score
Qualifications   0.50               poor        0
                                    fair        20
                                    good        80
                                    excellent   100
Experience       0.32               poor        0
                                    fair        50
                                    good        100
References       0.18               poor        0
                                    good        100

Applying this model to Romeo’s and Juliet’s ratings on the three criteria (Table 1) generates their total scores (out of a theoretical maximum of 100 points):

  • Romeo’s total score (‘poor’, ‘good’, ‘good’): (0.5 x 0) + (0.32 x 100) + (0.18 x 100) = 50 points
  • Juliet’s total score (‘good’, ‘poor’, ‘good’): (0.5 x 80) + (0.32 x 0) + (0.18 x 100) = 58 points

Based on their total scores, Juliet is ranked ahead of Romeo for the job [albeit the theoretically best candidate gets 100 points: i.e. (0.5 x 100) + (0.32 x 100) + (0.18 x 100)].
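
To make the arithmetic concrete, here is a minimal sketch in Python of the weighted-sum calculation, using the made-up weights and scores from Table 2; all names and numbers are purely illustrative:

  # Criterion weights (summing to 1) and criterion scores (0-100), as in Table 2
  weights = {"qualifications": 0.50, "experience": 0.32, "references": 0.18}
  scores = {
      "qualifications": {"poor": 0, "fair": 20, "good": 80, "excellent": 100},
      "experience": {"poor": 0, "fair": 50, "good": 100},
      "references": {"poor": 0, "good": 100},
  }

  def total_score(ratings):
      # Weighted sum of the applicant's criterion scores (0-100 overall)
      return sum(weights[c] * scores[c][level] for c, level in ratings.items())

  romeo = {"qualifications": "poor", "experience": "good", "references": "good"}
  juliet = {"qualifications": "good", "experience": "poor", "references": "good"}

  print(total_score(romeo))   # 50.0
  print(total_score(juliet))  # 58.0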

It should be obvious from this example that the ranking of applicants depends on the criteria and their weights. Different criteria or weights will produce different total scores and, potentially, different rankings – and, potentially, a different final decision. (The weights and scores above were simply made up, for the sake of this simple illustration.)

Therefore, fundamental to the implementation of MCDM are:

  1. The specification of the ‘right’ criteria (and their levels); and
  2. The determination of accurate (i.e. valid and reliable) weights.

The weights should reflect decision-makers’ (and, potentially, other stakeholders’) preferences with respect to the relative importance of the criteria – so that the resulting ranking of alternatives is as accurate as possible. In most applications, determining weights requires specialised methods to elicit decision-makers’ expert judgements.

MCDM software

MCDM is increasingly supported by specialised software, such as 1000minds. MCDM software frees “the facilitator/analyst and decision-maker from the technical implementation details, allowing them to focus on the fundamental value judgments” (Belton & Stewart 2002, p. 345).

MCDM software is especially useful for applications involving many alternatives and criteria, and when the methods for determining the weights on the criteria (and scoring the alternatives on the criteria) – steps 4 and 5 in the MCDM process explained later below – are technically sophisticated.

Also, software capable of surveying potentially 1000s of people is useful for MCDM applications that involve eliciting and analyzing the preferences of members of the general population.

MCDM software, ranging from the most basic to the state of the art, is catalogued in published overviews such as Weistroffer & Li (2016) and Amoyal (2018).

This article

Most of the remainder of this article, which is adapted from parts of Hansen & Devlin (2019) – the first author led the writing of both articles – is devoted to explaining the steps for creating and applying weighted-sum models, including methods for specifying criteria and determining their weights.

First, though, the origins and emergence of MCDM are discussed briefly, and this is followed by a more detailed discussion of weighted-sum models (as introduced above).

Origins and emergence of MCDM

Historically, the first example of a formal MCDM approach is usually recognised as being Benjamin Franklin’s “moral or prudential algebra” from 1772 (Franklin 1973).

Franklin’s approach, which he explained in a letter to his friend Joseph Priestley, involved tabulating the ‘pros’ and ‘cons’ of the alternative being evaluated relative to another alternative (e.g. the status quo), and then successively trading them off – in effect, weighting them – in order to identify the better alternative. Though this simple approach is effective for decisions involving choosing between two alternatives, it does not scale up for more alternatives.

More than 200 years later, more technically sophisticated methods for choosing between more than just two alternatives, and potentially involving multiple decision-makers, were presented in Ralph Keeney’s and Howard Raiffa’s seminal book Decisions with Multiple Objectives: Preferences and Value Tradeoffs (Keeney & Raiffa 1993), first published in 1976 and again in 1993.

In 1979, Stanley Zionts helped popularise the abbreviation MCDM (for Multiple Criteria Decision-Making) with his article for managers: “MCDM – If not a Roman numeral, then what?” (Zionts 1979). Other significant events from MCDM’s history are discussed in the book by Köksalan, Wallenius & Zionts (2011).

As mentioned above, MCDM is increasingly supported by specialised software. Before the advent of the World Wide Web in the 1990s, most MCDM software was based on spreadsheets – e.g. an early example was described in 1973 (Dyer 1973). Nowadays, MCDM software is increasingly web-based.

Weighted-sum models

As mentioned earlier, most MCDM applications are based on ‘weighted-sum models’ – also known in the MCDM academic literature as ‘additive multi-attribute value models’.

As explained earlier (and see Table 2 again), weighted-sum models involve decision-makers explicitly weighting the criteria for the decision problem being addressed and rating the alternatives on the criteria. Each alternative’s overall performance on the criteria is aggregated via a linear (i.e. additive) equation to produce the alternative’s ‘total score’, by which the alternatives are ranked.

Points systems

A commonplace and equivalent representation of the linear equations at the heart of weighted-sum models is a schedule of ‘point values’ (sometimes referred to as ‘preference values’) for each criterion: commonly known as a ‘points system’ – also as ‘additive’, ‘linear’, ‘scoring’, ‘point-count’ and ‘points’ models or systems.

The points-system equivalent of the weighted-sum model in Table 2 appears in Table 3, where the point value for each level on a criterion represents the combined effect of the criterion’s relative importance (weight) and its degree of achievement as reflected by the level (for more information, see interpreting preference values).

In other words, consistent with the linear equations described above, a level’s point value can be obtained by multiplying the criterion weight by the criterion score from the equation – as can be easily confirmed by multiplying the weights by the scores in Table 2, as reported in the fourth column below (for illustrative purposes). (Likewise, a point value can be ‘decomposed’ – in the opposite direction – into the criterion’s weight and score.)

Arguably, a simple schedule of point values (a points system) is easier to implement than an equation (a set of weights and scores that need to be multiplied together). And in tabular form, a points system takes up less space (e.g. compare Table 2 and Table 3)!

Table 3: A points system for ranking job applicants

Criterion        Level       Points   (illustrative calculation)
Qualifications   poor        0        (0.5 x 0)
                 fair        10       (0.5 x 20)
                 good        40       (0.5 x 80)
                 excellent   50       (0.5 x 100)
Experience       poor        0        (0.32 x 0)
                 fair        16       (0.32 x 50)
                 good        32       (0.32 x 100)
References       poor        0        (0.18 x 0)
                 good        18       (0.18 x 100)

Applying the points system in Table 3 to Romeo’s and Juliet’s ratings on the three criteria (Table 1) generates their total scores (i.e. the same as before, thereby confirming the equivalence of Tables 2 and 3):

  • Romeo’s total score (‘poor’, ‘good’, ‘good’): 0 + 32 + 18 = 50 points
  • Juliet’s total score (‘good’, ‘poor’, ‘good’): 40 + 0 + 18 = 58 points

Again, based on their total scores, Juliet is ranked ahead of Romeo for the job (again, albeit the theoretically best candidate gets 100 points: i.e. 50 + 32 + 18).
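
Continuing the Python sketch above (same made-up numbers), the equivalence of the two representations can be checked by deriving each point value as the criterion weight multiplied by the criterion score, and then summing the point values directly:

  # Point value for each level = criterion weight x criterion score (Table 2 -> Table 3)
  points = {
      criterion: {level: weights[criterion] * score for level, score in levels.items()}
      for criterion, levels in scores.items()
  }

  def total_points(ratings):
      # Sum of point values across the criteria (identical to the weighted sum)
      return sum(points[c][level] for c, level in ratings.items())

  print(total_points(romeo))   # 50.0
  print(total_points(juliet))  # 58.0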

‘Structural’ independence

The linearity of ‘total score’ equations – in either their weighted-sum model (Table 2) or points system (Table 3) representations – means that, by construction, interaction effects between the criteria are ruled out. In other words, the criteria are ‘structurally’ independent, meaning that the range of possible ratings of alternatives on one criterion is independent of (i.e. not constrained by) ratings on another criterion, and vice versa.

In the example above, when ranking job applicants, the qualifications and references criteria are structurally independent if it is possible for applicants with excellent qualifications to have poor or good references, as well as it being possible for poorly qualified applicants to have poor or good references. (These qualifications and references combinations are possible, right?)

Criteria can be structurally independent despite being highly correlated. For example, job applicants with excellent qualifications usually have good references, but not always (sometimes they have poor references). Thus, structural independence concerns what is possible rather than what is probable.

However, even if two criteria exhibit structural independence (as above), they might not be perfectly independent, in the sense that some of their extreme combinations may be impossible or unrealistic. For example, if the job in the example were ‘heart surgery’, it is hard to imagine a poorly qualified applicant (e.g. without a medical degree) having good references. Hence, as much as possible, the criteria should be structurally independent (if not perfectly so).

Simplicity and accuracy

In practical terms, a major attraction of weighted-sum models – in either their linear equation (Table 2) or points system (Table 3) forms – is the simplicity by which each alternative’s performance on each criterion is aggregated to produce the alternative’s total score. (Non-linear functions for aggregating the criteria, including multiplicative functions, are also possible but very rare.)

More importantly, such simple models have been found – almost universally, across very many studies of MCDM applications – to be more accurate than the intuitive or holistic judgments of decision-makers (Kahneman 2011).

According to Hastie and Dawes (2010, p. 52), weighted-sum models are:

surprisingly successful in many applications. We say ‘surprisingly’, because many judges claim that their mental processes are much more complex than the linear summary equations would suggest – although empirically, the equation does a remarkably good job of ‘capturing’ their judgment habits.

Hastie and Dawes (p. 52) also explained that:

The mind is in many essential respects a linear weighting and adding device. In fact, much of what we know about the neural networks in the physical brain suggests that a natural computation for such a ‘machine’ is weighting and adding, exactly the fundamental processes that are well described by linear equations.

Also, according to Hadorn et al. (2003, p. 49):

[Weighted-sum models / points systems] are, as a rule, more accurate than human predictors. This is not surprising, as it has been known for decades that human cognitive performance is limited with respect to complex multi-variable judgment and prediction tasks (Meehl 1954).

Other MCDM approaches

It is also worth mentioning, mainly for completeness, that other MCDM approaches – ones not based on aggregative functions for combining alternatives’ performance on the criteria via weights – are potentially available as alternatives to weighted-sum models (or points systems), albeit such approaches are unlikely to be particularly useful.

The simplest alternative approach to weighted-sum models is a table – sometimes referred to as a ‘performance matrix’ – for reporting the alternatives’ performance on the criteria (i.e. where each criterion is a column in the table/matrix). When one alternative dominates the others on all criteria, or where the trade-offs involved in selecting an alternative are clear and uncontroversial, decision-makers can use such a table to reach their decision.

Of course, most decision problems are more complicated than this! Most MCDM applications involve confronting non-trivial trade-offs between criteria. Therefore, merely tabulating alternatives’ performance on the criteria is insufficient for most applications (although such a table is useful at step 3 in the MCDM process outlined in the next section).

Outranking methods

A more sophisticated MCDM approach capable of evaluating trade-offs between criteria, and that is not based on aggregative functions for combining alternatives’ performance on criteria (via weights), is the group of ‘outranking’ methods, including:

In essence, outranking methods involve decision-makers’ pairwise rankings of alternatives relative to each other on each criterion in turn and then the combining of the pairwise-ranking results (but not via weights) in order to obtain a measure of support for judging each alternative as the top-ranked alternative overall.

Relative to weighted-sum models, outranking methods are rarely used – probably due to their complexities and the non-intuitive nature of their inputs and algorithms relative to weighted-sum models.

Steps in the MCDM process

Consistent with the process represented in the diagram at the start of the article, the steps for creating and applying weighted-sum models (or points systems) common to most MCDM applications are summarised in Table 4 and are discussed in turn next.

Given the wide range of possible MCDM applications (as mentioned earlier), the steps, as presented here, are necessarily generic. Equivalent processes for specific applications are available in the literature, such as prioritizing patients for access to health care (Hansen et al. 2012).

Although the steps are presented in sequence here, they do not necessarily need to be performed in that order. In particular, step 4 (scoring alternatives on the criteria) and step 5 (weighting the criteria) are intrinsically linked and can be implemented sequentially, simultaneously, or iteratively, depending on the methods employed, and so the two steps are explained together.

Also, earlier steps, such as step 1 (structuring the decision problem) and step 2 (specifying criteria), can be revisited throughout the process as new insights into the particular application emerge and revisions and refinements become desirable.

Though, in principle, the steps can be performed ‘by hand’ (e.g. supported by spreadsheets), many of them, and often entire processes, are supported by MCDM software (as above) – especially useful for applications involving many alternatives and criteria, and when the scoring (step 4) and weighting (step 5) methods used are technically sophisticated.

Table 4: Steps in the MCDM process

Step – Brief description
1. Structuring the decision problem – Identify objectives, alternatives, decision-makers, any other stakeholders, and the output required.
2. Specifying criteria – Specify criteria for the decision that are relevant to decision-makers (and, potentially, other stakeholders).
3. Measuring alternatives’ performance – Gather information about the alternatives’ performances on the criteria.
4. Scoring alternatives on the criteria – Convert performance measures into scores, representing each alternative’s degree of achievement on the criteria.
5. Weighting the criteria – Determine weights for the criteria, representing their relative importance to decision-makers.
6. Applying scores and weights to rank alternatives – Multiply alternatives’ scores on the criteria by the weights and sum to get total scores, by which the alternatives are ranked.
7. Supporting decision-making – Use MCDM outputs, including sensitivity analysis, to support decision-making, i.e. ranking or selecting alternatives (depending on the application).

Step 1: Structuring the decision problem

The first step involves structuring and framing the decision problem being addressed. It is important to clarify the over-arching objectives of the decision-making exercise.

Related issues include, where possible, identifying the alternatives under consideration, whether the decision is a one-off or repeated application, and the output required from the MCDM (e.g. a ranking or a selection). These elements should be validated with stakeholders.

Step 2: Specifying criteria

The second step involves specifying appropriate criteria for the decision: criteria that are valid, reliable, and relevant to decision-makers and other stakeholders. The criteria should be specified without major overlaps (double-counting) or redundancy (irrelevant criteria).

As much as possible, criteria should be ‘structurally independent’ – i.e. the range of possible ratings on a criterion should be independent of (i.e. not constrained by) ratings on other criteria (even if the criteria are correlated), and vice versa.

Depending on the application, the criteria can be identified from reviews of the literature and similar decisions, and from facilitated discussions, expert opinions, focus groups, and surveys. As in the previous step, stakeholders should be involved in identifying and validating the criteria.

Step 3: Measuring alternatives’ performances

Data about the alternatives’ performance in terms of each of the criteria – which can be presented in a ‘performance matrix’ (table) – can be gathered in a variety of ways, ranging from expert opinions to rapid literature reviews, to full systematic reviews and modelling exercises.

The sophistication and intensity of the data-gathering activity depend on the availability of relevant evidence, the decision problem, and also other practical factors, such as the resources available for the job.

MCDM is capable of combining quantitative and qualitative data, and also subjective judgments (in the absence of ‘harder’ data) with more traditional scientific evidence in the same application.

Steps 4 and 5: Scoring alternatives on the criteria and weighting the criteria

Steps 4 and 5 are explained here together because they are intrinsically linked. They can be performed sequentially, simultaneously, or iteratively, depending on the application.

Scoring the alternatives on the criteria involves converting each alternative’s performance on each criterion into a numerical score. The scores are usually normalised so that the worst performance on the criterion gets a score of 0 and the best performance gets 100.

Scores can be implemented using a continuous scale (0-100), or, alternatively, as in the example at the beginning of the article, two or more mutually-exclusive and exhaustive levels (e.g. low, medium, high) can be employed, each with a point value (e.g. medium = 60).

Weighting the criteria involves determining their relative weights – normalised to sum across the criteria to unity – representing each criterion’s relative importance to decision-makers.

The significance of the scoring and weighting steps is highlighted by recognising that even if valid and reliable criteria have been specified (step 2) and the alternatives’ performances have been accurately measured (step 3), if the alternatives’ scores or the weights on the criteria are ‘wrong’ (inaccurate) then, logically, the ‘wrong’ decision – as determined by the ranking of the alternatives’ total scores – is almost certain.

Thus, it is important that the alternatives’ scores and the criteria’s weights are represented as validly and reliably as possible.

A variety of scoring and weighting methods are available, often supported by MCDM software (as above). Many of these methods involve pairwise comparisons of alternatives and/or criteria, and they can be broadly classified as either direct or indirect methods.

Direct methods for scoring alternatives involve decision-makers directly expressing how they feel about the performance of each alternative on a criterion relative to the other alternatives. These expressions of a decision-maker’s preferences are usually represented using either an interval scale (e.g. alternatives are rated on a 0-100 scale on a criterion) or a ratio scale measurement (e.g. “alternative A performs three times as well as alternative B on a criterion”), from which scores are derived (i.e. directly).

Similarly, direct methods for weighting the criteria involve decision-makers directly expressing how they feel about the relative importance of the criteria, again usually represented in terms of either an interval scale (e.g. criteria are rated on a 0-100 scale) or a ratio scale measurement (e.g. “criterion A is three times as important as criterion B”) from which criterion weights are derived (directly).

In contrast to direct methods, indirect methods involve decision-makers expressing their preferences usually by ranking or choosing between two or more alternatives – real or hypothetical – defined on some or all of the criteria (i.e. ‘partial’ or ‘full profiles’). From these rankings or choices, scores and weights are derived (indirectly) using regression-based techniques or other quantitative methods.

Relative to direct methods, the advantage of indirect methods is that they require decision-makers to express a choice involving a trade-off between the criteria – and so indirect methods are often referred to as choice-based. Because choosing between things is a natural type of decision activity experienced by everyone in their daily lives, choice-based (indirect) methods are likely to be more valid and reliable, as well as easier for decision-makers, than direct methods.

Direct methods include:

  • Direct rating
  • Points allocation
  • SMART (Edwards 1977)
  • AHP (Saaty 1977)
  • SMARTER (Edwards & Barron 1994)
  • Swing weighting (von Winterfeldt & Edwards 1986)
  • Bisection method
  • Indifference method

Indirect methods include:

  • PAPRIKA (Hansen & Ombler 2008)
  • Conjoint analysis (or discrete choice experiments, DCE), regression-based (Green, Krieger & Wind 2001)

These ten methods are briefly explained in turn below, and more detailed information is available from the references cited above. (Readers uninterested in these descriptions are welcome to skip to the next step – step 6 – in the MCDM process.)

Direct scoring and weighting methods

Direct rating

Scoring alternatives on the criteria

For each criterion, each alternative is rated on a point scale from 0 to 100 – e.g. using a visual analogue scale (VAS) – in proportion to its performance on the criterion relative to the other alternatives.

Alternatives’ scores on the criterion are normalised (or expressed directly) so that the lowest performer gets 0 and the highest performer gets 100 (as for the points allocation and SMART methods, below).

Weighting the criteria

Each criterion is rated on a point scale from 0 to 100 – e.g. using a VAS – in proportion to its importance relative to the other criteria. Weights for the criteria – normalised to sum across the criteria to unity (as for the other methods below, too) – are calculated from ratios of the ratings.
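
As a minimal sketch (the ratings are invented), the normalisation from importance ratings to weights might look as follows; essentially the same normalisation step applies to the points allocation, SMART and swing weighting methods described below:

  # Invented importance ratings for three criteria on a 0-100 scale
  ratings = {"qualifications": 90, "experience": 60, "references": 30}

  total = sum(ratings.values())
  weights = {criterion: r / total for criterion, r in ratings.items()}
  print(weights)  # approx. {'qualifications': 0.50, 'experience': 0.33, 'references': 0.17}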

Points allocation

Scoring alternatives on the criteria

For each criterion, a total of 100 points is allocated across the alternatives, in proportion to their relative performance on the criterion.

Weighting the criteria

A total of 100 points is allocated across the criteria, in proportion to their relative importance. Weights for the criteria are calculated from ratios of the points.

SMART

Scoring alternatives on the criteria

For each criterion, the lowest-performing alternative is identified and is given a value of 10 points. The other alternatives are rated relative to this alternative by also assigning points to them.

Weighting the criteria

The least-important criterion is identified and is given a value of 10 points. The other criteria are rated relative to this criterion by also assigning points (of higher value) to them. Weights for the criteria are calculated from ratios of the points.

AHP

Scoring alternatives on the criteria

For each criterion, the alternatives are pairwise compared and their “intensity of importance” relative to each other is expressed on a ratio scale from 1 to 9, usually represented as ranging from “equally preferred” (ratio 1) to “extreme importance” (ratio 9).

Scores for the alternatives are calculated from ratios using eigenvalue analysis, normalised to sum across each criterion to unity (note, in this respect AHP is different from the other scoring methods here).

Weighting the criteria

Each level in the hierarchy of criteria and sub-criteria (and sub-sub-criteria, etc.), as represented in a ‘value tree’, can be analyzed as a separate decision problem (and then combined multiplicatively).

For each level, the criteria are pairwise compared and their “intensity of importance” relative to each other is expressed on the above-mentioned 1-9 ratio scale. Weights are calculated from ratios using eigenvalue analysis.
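
As an illustration of the eigenvalue step only (the 3 x 3 comparison matrix below is invented), weights can be computed from a pairwise-comparison matrix by taking its principal eigenvector and normalising it to sum to one:

  import numpy as np

  # Invented pairwise-comparison matrix on the 1-9 scale:
  # entry [i][j] = how many times more important criterion i is than criterion j
  A = np.array([
      [1.0, 3.0, 5.0],
      [1 / 3, 1.0, 2.0],
      [1 / 5, 1 / 2, 1.0],
  ])

  eigenvalues, eigenvectors = np.linalg.eig(A)
  principal = eigenvectors[:, np.argmax(eigenvalues.real)].real

  weights = principal / principal.sum()  # normalise to sum to 1
  print(weights.round(2))                # approx. [0.65, 0.23, 0.12]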

SMARTER

Scoring alternatives on the criteria

SMARTER is not usually used for scoring alternatives on criteria.

Weighting the criteria

The K criteria are ranked in order of their importance. The most-important criterion gets a value of 1, the second-most important criterion gets a value of 2, and so on down to a value of K for the least-important criterion.

Weights for the criteria are calculated using this formula: w_k = (1/K) x (1/k + 1/(k+1) + … + 1/K), where w_k is the weight of the kth-ranked criterion and k = 1, 2, …, K.

For example, with four criteria, applying the formula above, the weights are: 0.52, 0.27, 0.15, 0.06.
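
A short sketch of this calculation, reproducing the four-criteria example above (the function name is ours):

  def smarter_weights(K):
      # w_k = (1/K) x sum of 1/i for i = k, ..., K
      return [sum(1 / i for i in range(k, K + 1)) / K for k in range(1, K + 1)]

  print([round(w, 2) for w in smarter_weights(4)])  # [0.52, 0.27, 0.15, 0.06]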

Other, similar methods based on rank orders also exist (Riabacke, Danielson & Ekenberg 2012).

Swing weighting

Scoring alternatives on the criteria

Swing weighting is not used for scoring alternatives on criteria.

Weighting the criteria

For each criterion, the effect of a ‘swing’ in performance from the worst to the best possible performance is evaluated. The criterion judged to be the most important in terms of its swing gets 100 points.

The second-most important criterion is identified and is assigned points relative to the 100 points for the most important criterion. The exercise is repeated for the remaining criteria. Weights for the criteria are calculated from ratios of the points.

Bisection method

Scoring alternatives on the criteria

For each criterion, the lowest- and highest-performing alternatives are identified and are rated 0 and 100 respectively. The decision-maker then defines the performance on the criterion that is halfway in value between these two extremes, which is therefore worth 50 points (midway between 0 and 100).

The next two midpoints on the criterion – between the performances worth 0 and 50, and between those worth 50 and 100 – are then defined. These two endpoints and three midpoints are usually sufficient to trace out the approximate shape of the criterion’s ‘value function’.

Weighting the criteria

The bisection method is not used for weighting criteria.

Indifference method

Scoring alternatives on the criteria

For each criterion, the range of possible values for the performance on the criterion is divided into, say, four or five equal intervals (this method is for criteria that are quantitatively measurable and monotonic).

The intervals are ranked in order of importance, thereby indicating the shape of the ‘value function’ (e.g. concave or convex) so it can be traced out approximately.

Weighting the criteria

The indifference method is not used for weighting criteria.

Indirect methods

PAPRIKA

Scoring alternatives on the criteria, and Weighting the criteria

All pairs of real or hypothetically possible alternatives that are defined on two or more criteria at a time (where each criterion has performance levels), and that involve a trade-off between the criteria, are pairwise ranked.

Each time a decision-maker ranks a pair of alternatives, all other pairs that can be pairwise ranked via transitivity are identified and eliminated, thereby minimizing the number of pairwise rankings explicitly performed. Criteria weights and scores for the criteria’s levels are determined from the explicitly ranked pairs using mathematical methods based on linear programming.
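
The following sketch illustrates only the transitivity idea (it is not the full PAPRIKA method, which also uses linear programming to derive the weights and scores): once a decision-maker has ranked ‘A over B’ and ‘B over C’, the pair ‘A versus C’ can be ranked implicitly and need not be asked.

  # Hypothetical explicitly ranked pairs, each recorded as (winner, loser)
  explicit = {("A", "B"), ("B", "C")}

  def transitive_closure(pairs):
      # Add every pairwise ranking implied by transitivity
      closure = set(pairs)
      changed = True
      while changed:
          changed = False
          for (a, b) in list(closure):
              for (c, d) in list(closure):
                  if b == c and (a, d) not in closure:
                      closure.add((a, d))
                      changed = True
      return closure

  print(transitive_closure(explicit))  # includes the inferred pair ('A', 'C')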

You can easily experience PAPRIKA for yourself by creating a 1000minds software account (other methods are available from other MCDM software).

Conjoint analysis (or DCE), regression-based

Scoring alternatives on the criteria, and Weighting the criteria

‘Choice sets’ comprising two or more hypothetical alternatives defined on the criteria are ranked, or the top-ranked alternative is identified. Usually, multiple decision-makers are surveyed individually with respect to different choice sets.

Weights for the criteria are calculated from the aggregated rankings across all participants using regression techniques, such as multinomial logit analysis and hierarchical Bayes estimation.

Thus, as surveyed above, a range of scoring and weighting methods is available for users to choose from.

All methods have their relative strengths and weaknesses – such that choosing the ‘best’ MCDM method is itself a multi-criteria decision problem!

Issues to consider when thinking about which methods to use include (De Montis et al. 2004):

  • How well the methods elicit trade-offs between criteria
  • The time and resources required to implement alternative methods
  • The cognitive burden imposed on participants
  • Whether skilled facilitators are required
  • The need for additional data processing and statistical analysis
  • The validity of the underlying assumptions with respect to decision-makers’ preferences
  • Whether the outputs produced will satisfy decision-makers’ objectives

Step 6: Applying scores and weights to rank alternatives

Having scored the alternatives on the criteria and weighted the criteria, calculating the alternatives’ ‘total scores’ is straightforward (and usually performed automatically by MCDM software).

With reference to the scoring and weighting methods mentioned above:

  • For the direct methods and, in effect, the PAPRIKA (indirect) method, each alternative’s scores on the criteria are multiplied by the weights, and then the weighted scores are summed across the criteria to get each alternative’s total score.
  • For the conjoint analysis (or DCE) method, the regression technique estimates each alternative’s value (or ‘utility’) or its probability of being preferred by decision-makers.

Step 7: Supporting decision-making

The MCDM results can be presented in tables or in graphs for decision-makers to review.

As for any analysis based on uncertain data inputs, it is important to check the robustness of the results via sensitivity analysis with respect to plausible variations in the alternatives’ performances (step 3), their scores on the criteria (step 4) and the criteria weights (step 5).
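
For instance, a simple one-way sensitivity analysis on the weights might perturb each weight in turn (re-normalising so the weights still sum to one) and re-rank the alternatives, to see whether the decision changes. A minimal sketch, reusing the made-up job-applicant numbers from earlier:

  def rank(weights, alternatives):
      # Rank alternatives (best first) by their weighted-sum total scores
      totals = {
          alt: sum(weights[c] * s for c, s in criterion_scores.items())
          for alt, criterion_scores in alternatives.items()
      }
      return sorted(totals, key=totals.get, reverse=True)

  base_weights = {"qualifications": 0.50, "experience": 0.32, "references": 0.18}
  applicants = {
      "Romeo":  {"qualifications": 0,  "experience": 100, "references": 100},
      "Juliet": {"qualifications": 80, "experience": 0,   "references": 100},
  }

  # Perturb each weight by +/-10% in turn, re-normalise, and re-rank
  for criterion in base_weights:
      for factor in (0.9, 1.1):
          w = dict(base_weights)
          w[criterion] *= factor
          total = sum(w.values())
          w = {c: v / total for c, v in w.items()}
          print(criterion, factor, rank(w, applicants))  # Juliet stays first in every case here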

In some applications, other considerations in addition to the MCDM results may also be relevant. For example, decisions with implications for budgets, such as project or investment decision-making, involve comparing the benefits of each alternative, as represented by its MCDM total score, relative to its cost (and perhaps also considering other factors, such as strategic or legal ones). A ‘value for money’ chart can be used for this purpose.

Finally, it is worthwhile repeating, as mentioned at the beginning of the article, that MCDM is intended to serve as a tool to help people, individually or in groups, to reach a decision – their decision (made by humans), not the tool’s decision.

Therefore, decision-makers need to understand the MCDM results, including any significant limitations of the analysis, and use the results, to a greater or lesser extent, to support them in reaching their decision. Where appropriate, the MCDM results can be used to communicate and to justify the final decision to stakeholders.

Conclusions about MCDM / MCDA

As explained and illustrated in the article, MCDM (Multi-Criteria Decision-Making, also known as Multi-Criteria Decision Analysis, MCDA) is about methods and software for making decisions when multiple criteria (or objectives) need to be considered together, in order to rank or choose between the alternatives being evaluated.

MCDM – often supported by specialised MCDM software – is well suited to supporting decision-making by individuals, groups and organizations in many 1000s of possible applications.

Most MCDM applications are based on creating and applying weighted-sum models (also known as points systems), which involves the explicit weighting of criteria and scoring of alternatives.

In general terms, ‘good practice’ when implementing weighted-sum models (points systems) includes:

  • Carefully structuring the decision problem being addressed
  • Ensuring that appropriate criteria are specified
  • Measuring alternatives’ performance accurately
  • Using valid and reliable methods for scoring alternatives on the criteria and weighting the criteria
  • Presenting the MCDM results, including sensitivity analysis, to decision-makers to support their decision-making

Other MCDM / MCDA resources

Here is a collection of recommended readings that we have found useful in courses we have taught.

Groups and websites

Wikipedia articles

Books

Journals

References

J Amoyal (2018), “Decision analysis”, OR/MS Today 45.

F Barron & H Person (1979), “Assessment of multiplicative utility functions via holistic judgments”, Organizational Behavior & Human Decision Processes 24, 147-66.

V Belton & T Stewart (2002), Multiple Criteria Decision Analysis: An Integrated Approach, Kluwer.

P Bottomley, J Doyle & R Green (2000), “Testing the reliability of weight elicitation methods: Direct rating versus point allocation”, Journal of Marketing Research 37, 508-13.

J-P Brans & B Mareschal (1994), “The PROMCALC & GAIA decision support system for multicriteria decision aid”, Decision Support Systems 12, 297-310.

A De Montis, P De Toro, B Droste-Franke, I Omann & S Stagl (2004), “Assessing the quality of different MCDA methods”, In: M Getzner, C Spash & S Stagl (Eds), Alternatives for Environmental Valuation, Routledge.

J Dyer (1973), “A time-sharing computer program for the solution of the multiple criteria problem”, Management Science 19, 1379-83.

W Edwards (1977), “How to use multiattribute utility measurement for social decision making”, IEEE Transactions on Systems, Man & Cybernetics 7, 326-40.

W Edwards & F Barron (1994), “SMARTS and SMARTER: Improved simple methods for multiattribute utility measurement”, Organizational Behavior & Human Decision Processes 60, 306-25.

B Franklin (1973) [1772], “To Joseph Priestley”, In: W Willcox & D Bridgwater (Eds), The papers of Benjamin Franklin: January 1 through December 31, 1772, Yale University Press.

P Green, A Krieger & Y Wind (2001), “Thirty years of conjoint analysis: Reflections and prospects”, Interfaces 31(Suppl 3), S56-S73.

DC Hadorn & the Steering Committee of the Western Canada Waiting List Project (2003), “Setting priorities on waiting lists: Point-count systems as linear models”, Journal of Health Services Research & Policy 8, 48-54.

P Hansen & N Devlin (2019), “Multi-Criteria Decision Analysis (MCDA) in health care decision making”, In: Oxford Research Encyclopedia of Economics and Finance, Oxford University Press.

P Hansen, A Hendry, R Naden, F Ombler & R Stewart (2012), “A new process for creating points systems for prioritizing patients for elective health services”, Clinical Governance: An International Journal 17, 200-209.

P Hansen & F Ombler (2008), “A new method for scoring multi-attribute value models using pairwise rankings of alternatives”, Journal of Multi-Criteria Decision Analysis 15, 87-107.

R Hastie & R Dawes (2010), Rational Choice in an Uncertain World. The Psychology of Judgement and Decision Making, Sage Publications.

D Kahneman (2011), Thinking, Fast and Slow, Farrar, Straus and Giroux.

R Keeney & H Raiffa (1993), Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Cambridge University Press.

M Köksalan, J Wallenius & S Zionts (2011), Multiple Criteria Decision Making: From Early History to the 21st Century, World Scientific.

P Meehl (1954), Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, University of Minnesota Press.

B Roy (1991), “The outranking approach and the foundations of ELECTRE methods”, In: C Bana e Costa (Ed), Readings in Multiple Criteria Decision Aid, Springer.

T Saaty (1977), “A scaling method for priorities in hierarchical structures”, Journal of Mathematical Psychology 15, 234-81.

P Vincke & J-P Brans (1985), “A preference ranking organization method: The PROMETHEE method for multiple criteria decision-making”, Management Science 31, 647-56.

D Von Winterfeldt & W Edwards (1986), Decision Analysis and Behavioral Research, Cambridge University Press.

H Weistroffer & Y Li (2016), “Multiple criteria decision analysis software”, In: S Greco, M Ehrgott & J Figueira (Eds), Multiple Criteria Decision Analysis: State of the Art Surveys, Springer.

S Zionts (1979), “MCDM – If not a Roman numeral, then what?”, Interfaces 9, 94-101.