
What is conjoint analysis?

An introduction to conjoint analysis

Introduction

Imagine you are buying a new smartphone. Do you care more about the price, the camera quality or the battery life? If you had to choose, would you pay $200 more for a better camera? Would you sacrifice camera quality for a battery that lasts two days?

We all face trade-offs like this every day. Conjoint analysis is used to shed light on how we make such decisions.

What is conjoint analysis?

Conjoint analysis is a powerful survey-based method for understanding how people evaluate alternatives. It reveals which features, or “attributes”, matter most to people and quantifies the trade-offs they’re willing to make when choosing between options, such as consumer products and government policies.

Why not just ask people what they value directly? If you ask people what they want in a new phone, they will usually say “I want the lowest price, the best camera and the longest-lasting battery.” Because trade-offs between features are unavoidable in practice, such obvious answers are of little use for designing real products.

Instead, researchers use conjoint analysis surveys to ask participants to choose between realistic, hypothetical options. By analyzing these choices, researchers can reveal the otherwise hidden relative importance of each attribute.

Today, conjoint analysis is one of the most popular quantitative methods in market research for new product design and feature testing. It’s also widely used in the government sector to support policy-making and is a staple in academic research.

Though conjoint analysis usually involves complex quantitative techniques, modern specialized software like 1000minds makes conjoint analysis accessible to everyone.

Intended audience and what’s in this article

Whether you are new to conjoint analysis, feeling a bit rusty or planning to run your first study, this article will give you a solid, easy-to-understand foundation.

This article is also for people interested in discrete choice experiments (DCE). As we will explain later on, a DCE is a type of conjoint analysis that is especially popular in academic and policy-oriented settings.

In this introductory article, we keep things broad and practical, covering:

  • Key concepts and terminology
  • How a conjoint survey works
  • Types of conjoint analysis available
  • Benefits and challenges of conjoint analysis
  • Common examples of applications

Looking for something more advanced? Once you have the basics down, you might like to check out our deeper dive into conjoint analysis. This second article more fully explains the methods outlined here and walks you through a detailed, step-by-step example – from specifying attributes and running the survey, including recruiting participants, through to interpreting the results.

Video: A 4-minute introduction to conjoint analysis & DCE

How conjoint analysis works

Conjoint analysis simulates real-world decision-making by asking people to choose between alternatives represented as combinations of attributes. Observing people’s choices reveals which attributes are most important and the trade-offs people are willing to make.

Key concepts and terminology

  • Attributes: The features or characteristics of the products (or government policies) being studied, e.g. price, quality, delivery time.

    For most applications, fewer than a dozen attributes is sufficient (more are possible); four to eight is typical.

  • Levels: The categories or particular values each attribute can take, e.g. $10, $20, $30 for price.

    Attributes usually have different numbers of levels – typically from two or three up to seven or more, as appropriate for representing the products being studied.

  • Alternatives, also known as profiles or concepts: The underlying objects of interest – products or policies – represented on the attributes.

    The number of alternatives can range from two to a dozen, hundreds or even thousands, including potentially all possible combinations of the levels on the attributes.

  • Choice sets: Two or more hypothetical alternatives (or profiles/concepts) presented to participants – in a “choice set” – for them to choose from, e.g. a high-quality product for $30 versus a low-quality product for $10.

    Depending on the conjoint analysis method used, participants are typically shown 10 to 30 choice sets each.

  • Trade-offs: As revealed by people’s choices, the compromises, or exchanges, between the attributes they are willing to make, e.g. paying a higher price for a higher-quality product.
  • Preference measurement – utilities: From people’s choices, the relative importance of the attributes, quantified as weights, or part-worth utilities, is revealed, e.g. quality’s weight = 0.3 and price’s weight = 0.15; and thus, quality is twice (0.3/0.15) as important as price.
  • Market simulations: Based on people’s utilities, their purchasing (or other) choices can be modeled, resulting in market-share predictions, e.g. product A’s market share = 43%, product B’s = 36%, C’s = 11%, etc.
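In code, the relationships between these terms might be sketched like this – a minimal illustration, using the hypothetical attributes and levels from the examples above:

```python
from itertools import product

# Attributes and their levels (hypothetical values from the examples above).
attributes = {
    "price": ["$10", "$20", "$30"],
    "quality": ["low", "high"],
    "delivery": ["1 day", "3 days"],
}

# All possible alternatives = every combination of one level per attribute.
alternatives = [dict(zip(attributes, combo))
                for combo in product(*attributes.values())]
print(len(alternatives))  # 3 * 2 * 2 = 12 possible alternatives

# A choice set: two hypothetical alternatives involving a trade-off
# (higher quality at a higher price, all else being equal).
choice_set = (
    {"price": "$30", "quality": "high", "delivery": "1 day"},
    {"price": "$10", "quality": "low", "delivery": "1 day"},
)
```

Real studies, of course, involve more attributes and present many such choice sets to each participant.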

How preferences are elicited

Most contemporary conjoint analysis methods involve participants being repeatedly shown “choice sets” containing two or more hypothetical alternatives defined on different combinations of attribute levels, often involving trade-offs, and asked to choose their preferred alternative.

Such choices between alternatives are intended to mirror real-life decision-making, such as choosing between product options, so that it is intuitive and feels natural for participants.

In contrast, early forms of conjoint analysis, nowadays referred to as “traditional” conjoint analysis, were based on less intuitive or natural methods for eliciting people’s preferences, such as rating or scoring scales – e.g. rating one attribute relative to another on a nine-point scale ranging from “equally preferred” to “extremely preferred”.

The simplest possible example of a choice set appears in Figure 1. It’s the simplest because it contains just two alternatives defined on just two attributes that involve a trade-off (e.g. where the other attributes in the conjoint analysis are assumed to be the same in each alternative). Choice sets with more alternatives and attributes are considered below.

Figure 1: Example of a choice set for a conjoint analysis into electric cars
Which electric car do you prefer?
  Car A: driving range 250 miles; basic features
  Car B: driving range 200 miles; premium features
Which electric car do you prefer?
  Car A: price $30,000; 3-year warranty
  Car B: price $35,000; 5-year warranty
Which electric car do you prefer?
  Car A: driving range 250 miles; 3-year warranty
  Car B: driving range 200 miles; 5-year warranty
(For each choice set, participants select one car or answer “They are equal”.)

Why is this methodology based on choice sets called “conjoint analysis” or a “discrete choice experiment” (DCE)?

  • Because the alternatives that people are asked to choose between are defined on two or more attributes at a time, such that they are, in effect, joined or conjoined (like conjoined twins) – hence conjoint analysis!
  • Because participants in a DCE are asked to make a series of discrete, or distinct, choices in an experimental, or controlled, setting – a discrete choice experiment (DCE)!

Most conjoint analysis methods restrict their choice sets to just two alternatives. However, some methods’ choice sets have more alternatives, e.g. imagine Figure 1 with three or more alternatives instead of just two.

Choosing from choice sets with just two alternatives is the same as pairwise ranking, which is a cognitively simpler and faster activity than choosing from three or more alternatives. For more information, check out our guide to pairwise comparisons.

As well as the number of alternatives in choice sets, another differentiator of conjoint analysis methods is the number of attributes for representing the alternatives (or profiles) in the choice sets. In this respect, there are two types of choice set: full profile and partial profile.

Full-profile choice sets are a traditional approach where each option in the choice sets is defined on all attributes included in the conjoint analysis, e.g. potentially, a dozen attributes together. An example of a full-profile choice set with six attributes appears in Figure 2.

Figure 2: Example of a full-profile choice set for choosing a vacation package
Which of these two vacation packages would you prefer to book?
  Package A: economy flight; $1,000 per person; extensive attractions (guided tours, special events); 2–4 hours travel time (one-way); standard 3-star hotel; diverse cuisine
  Package B: business flight; $2,000 per person; some attractions (3–5 local sites); 9+ hours travel time (one-way); luxury 5-star resort or boutique stay; limited cuisine options
Which of these two vacation packages would you prefer to book?
  Package A: premium economy flight; $800 per person; many attractions (6+ sites including tours); 5–8 hours travel time (one-way); premium 4-star hotel; gourmet cuisine experience
  Package B: first-class flight; $1,800 per person; few attractions (1–2 local sites); under 2 hours travel time (one-way); basic hostel or motel; standard cuisine
Which of these two vacation packages would you prefer to book?
  Package A: business flight; $2,000 per person; some attractions (3–5 local sites); 9+ hours travel time (one-way); luxury 5-star resort or boutique stay; gourmet cuisine experience
  Package B: economy flight; $700 per person; many attractions (6+ sites including tours); 2–4 hours travel time (one-way); standard 3-star hotel; diverse cuisine
(For each choice set, participants select one package or answer “They are equal”.)

To reduce cognitive complexity for participants, some conjoint analysis methods are based on partial-profile choice sets (instead of full-profile choice sets).

Partial-profile choice sets are defined on a subset of the attributes – e.g. just two or three (or more) – where, in effect, the levels on the excluded attributes are treated as the same for the two profiles in the choice set: i.e. “all else being equal”.

The simplest possible partial-profile choice set – the cognitively easiest and fastest for people to choose from – has just two attributes (Figure 1 above), where each choice set comprises a different pair of attributes drawn from the full complement included in the conjoint analysis.

After people’s preferences have been elicited – e.g. across 10 to 30 choice sets per person, depending on the conjoint analysis method used – the relative importance of the attributes, quantified as weights, or part-worth utilities, is determined.

Conjoint analysis methods differ in how utilities are calculated. Some summarize preferences at the aggregate or group level, whereas other methods estimate, to varying degrees of accuracy, individual utilities for each participant, revealing the variation, or heterogeneity, in people’s preferences.
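Exactly how utilities are estimated differs across methods, but the underlying idea can be illustrated with a toy example – not the estimator any particular software uses, and with entirely invented data. Hypothetical pairwise choices are generated from known utilities for two attributes, and a simple logit model fitted by gradient ascent recovers their relative importance:

```python
import itertools
import math

# Toy illustration only: recover attribute weights from pairwise choices.
# The attributes, their numeric coding and the data are all invented.
TRUE_W = [2.0, 1.0]  # hidden "truth": range is twice as important as warranty

def utility(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

# Alternatives are points on a small grid of (range, warranty) scores.
levels = [0.0, 0.5, 1.0]
alts = list(itertools.product(levels, repeat=2))

# Build every pairwise choice set; record which alternative "wins".
data = []  # (feature difference a - b, 1 if a chosen else 0)
for a, b in itertools.combinations(alts, 2):
    ua, ub = utility(a, TRUE_W), utility(b, TRUE_W)
    if ua == ub:
        continue  # "they are equal" answers carry no ranking information here
    diff = [ai - bi for ai, bi in zip(a, b)]
    data.append((diff, 1 if ua > ub else 0))

# Fit weights with a logit model: P(choose a) = sigmoid(w . (a - b)).
w = [0.0, 0.0]
for _ in range(2000):
    for diff, y in data:
        p = 1.0 / (1.0 + math.exp(-utility(diff, w)))
        for i in range(2):
            w[i] += 0.1 * (y - p) * diff[i]

print("range weight > warranty weight > 0:", w[0] > w[1] > 0)
```

Real conjoint software uses more sophisticated models (e.g. multinomial logit or hierarchical Bayes across many participants), but the principle is the same: find the weights that best explain the observed choices.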

Conjoint analysis example: Preferences for electric cars

Imagine running a conjoint analysis surveying a few hundred potential customers to discover their preferences for electric cars.

Each participant in the conjoint survey is presented with a series of choice sets representing hypothetical electric cars and asked to choose which option (car) they prefer (Figure 3). Depending on the method used, 10 to 30 choice sets, comprising different combinations of attribute levels, are presented to each participant.

To better relate to the example, you might like to experience the electric car conjoint survey for yourself.

Figure 3: Example choice sets (defined on two attributes at a time) for a conjoint analysis into electric cars
Which electric car do you prefer?
  Car A: driving range 250 miles; 3-year warranty
  Car B: driving range 200 miles; 5-year warranty
Which electric car do you prefer?
  Car A: driving range 450 miles; 4-year warranty
  Car B: driving range 300 miles; 6-year warranty
(For each choice set, participants select one car or answer “They are equal”.)

From the participants’ choices – at an aggregate or individual participant level (again depending on the method) – the relative importance, or weight, of the attributes and the levels within each attribute, reported as part-worth utilities, are determined (Figure 4 and Table 1).

As revealed by the weights in Figure 4 and Table 1 – where, for simplicity, these weights are at the aggregate level across all participants, on average (means) – the most important attribute is range (32%), followed by price (26.1%), then warranty (24.2%), and the least important is features (17.6%).

It can also be said that “range’s importance is 32%”, “price’s importance is 26.1%” (and so on for the other attributes).

As also reported in Table 1, for each attribute the utilities for levels of performance below the highest-ranked level are less than the overall weight: e.g. a “450 miles” range has a utility of 27%, which is lower than the 32% utility for the highest-ranked “500 miles” range (corresponding to the attribute’s weight).

Figure 4: Relative importance, or weight, of the attributes (summing to 100%)
[Donut chart of attribute weights: range 32.0%, price 26.1%, warranty 24.2%, features 17.6%]
Table 1: Part-worth utilities (% weights)
Range
200 miles 0.0%
250 miles 6.1%
300 miles 12.2%
350 miles 17.0%
400 miles 21.9%
450 miles 27.0%
500 miles 32.0%
Price
$45,000 0.0%
$40,000 7.1%
$35,000 14.3%
$30,000 20.2%
$25,000 26.1%
Warranty
3 years 0.0%
4 years 6.0%
5 years 12.0%
6 years 18.1%
7 years 24.2%
Features
basic 0.0%
premium 17.6%

Also, dividing one attribute’s weight by another attribute’s weight reveals the relative importance of the two attributes (Table 2). For example, range is 1.2 times (32% / 26.1%) as important as price; and, conversely, price is 0.8 times (26.1% / 32%) as important as range.

Table 2: Attribute relative importance (row weight divided by column weight)

               | Range | Price | Warranty | Features
Range 32.0%    |   –   |  1.2  |   1.3    |   1.8
Price 26.1%    |  0.8  |   –   |   1.1    |   1.5
Warranty 24.2% |  0.8  |  0.9  |    –     |   1.4
Features 17.6% |  0.6  |  0.7  |   0.7    |    –

The utilities (Table 1) can be used for ranking any alternatives being considered – based on their level ratings on the attributes – according to the alternatives’ “scores”, calculated by adding up for each alternative the relevant utilities for its levels on the attributes (Table 3).

Design A’s score of 73.1%, for example, is calculated by adding up the mean utilities for “350 miles”, “7 years”, “premium” and “$35,000”: 17 + 24.2 + 17.6 + 14.3 = 73.1.
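This scoring step is simple enough to sketch directly, using the mean utilities from Table 1 (only the levels needed for Designs A and D are included here):

```python
# Score designs by summing the part-worth utilities from Table 1.
UTILS = {
    ("range", "350 miles"): 17.0, ("range", "400 miles"): 21.9,
    ("warranty", "7 years"): 24.2, ("warranty", "6 years"): 18.1,
    ("features", "premium"): 17.6, ("features", "basic"): 0.0,
    ("price", "$35,000"): 14.3,
}

def score(design):
    # A design's score = the sum of the utilities for its levels.
    return round(sum(UTILS[(attr, lvl)] for attr, lvl in design.items()), 1)

design_a = {"range": "350 miles", "warranty": "7 years",
            "features": "premium", "price": "$35,000"}
design_d = {"range": "400 miles", "warranty": "6 years",
            "features": "premium", "price": "$35,000"}

print(score(design_a))  # 73.1, matching Design A in Table 3
print(score(design_d))  # 71.9, matching Design D
```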

In summary, conjoint analysis transforms subjective preferences into actionable utilities and market share predictions.

Table 3: Ranked electric-car designs

Design A | 1st | 73.1% | 350 miles, 7 years, premium, $35,000
Design D | 2nd | 71.9% | 400 miles, 6 years, premium, $35,000
Design C | 3rd | 68.8% | 500 miles, 5 years, premium, $40,000
Design B | 4th | 66.8% | 350 miles, 4 years, premium, $25,000
Design E | 5th | 65.3% | 450 miles, 6 years, basic, $30,000
Design H | 6th | 60.0% | 400 miles, 5 years, basic, $25,000
Design G | 7th | 32.4% | 300 miles, 3 years, basic, $30,000
Design F | 8th | 24.2% | 250 miles, 6 years, basic, $45,000

Finally, if utilities at the individual participant level are available (as in our deeper dive into conjoint analysis), they can be used for market simulations of consumer market shares, including “what-if” analysis – e.g. Table 4 shows what would happen to Design E’s market share if its warranty were extended, its features enhanced and its price raised.

Table 4: Market-share predictions after changes to Design E
(Market shares = participants’ 1st choice, n = 120)

Design | Before | After | Change | Range | Warranty | Features | Price
Design A | 31.7% | 17.9% | -43.4% | 350 miles | 7 years | premium | $35,000
Design B | 17.1% | 16.3% | -4.9% | 350 miles | 4 years | premium | $25,000
Design C | 22.1% | 16.7% | -24.5% | 500 miles | 5 years | premium | $40,000
Design D | 12.5% | 8.8% | -30% | 400 miles | 6 years | premium | $35,000
Design E | 10% | 32.1% | +220.8% | 450 miles | 6 years → 7 years | basic → premium | $30,000 → $40,000
Design F | 0% | 0% | 0% | 250 miles | 6 years | basic | $45,000
Design G | 0% | 0% | 0% | 300 miles | 3 years | basic | $30,000
Design H | 6.7% | 8.3% | +25% | 400 miles | 5 years | basic | $25,000
Total | 100% | 100% | 0%
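A first-choice simulation of this kind can be sketched as follows; the two participants’ individual utilities are invented for illustration:

```python
# First-choice market simulation sketch: each participant "buys" the design
# with the highest total utility for them; market share is the proportion
# of participants choosing each design.
def first_choice_shares(participants, designs):
    counts = {name: 0 for name in designs}
    for utils in participants:  # utils maps each level to that person's utility
        best = max(designs, key=lambda n: sum(utils[lvl] for lvl in designs[n]))
        counts[best] += 1
    n = len(participants)
    return {name: round(100 * c / n, 1) for name, c in counts.items()}

designs = {
    "A": ["350 miles", "7 years", "premium", "$35,000"],
    "E": ["450 miles", "6 years", "basic", "$30,000"],
}
# Two hypothetical participants with different (invented) utilities.
p1 = {"350 miles": 17, "450 miles": 27, "7 years": 24, "6 years": 18,
      "premium": 18, "basic": 0, "$35,000": 14, "$30,000": 20}
p2 = {"350 miles": 10, "450 miles": 30, "7 years": 10, "6 years": 8,
      "premium": 2, "basic": 0, "$35,000": 5, "$30,000": 12}
print(first_choice_shares([p1, p2], designs))  # {'A': 50.0, 'E': 50.0}
```

“What-if” analysis then amounts to editing a design’s levels and re-running the simulation to see how shares shift.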

Try 1000minds conjoint survey examples

Want to experience a conjoint analysis survey for yourself?

Please try the conjoint surveys below created using 1000minds conjoint analysis software (which you are very welcome to try out too).

The first survey is about electric cars and is where the data used to illustrate the conjoint analysis example above comes from. The second is a more light-hearted example to help you choose a breed of cat as a pet!

The conjoint analysis process

Most conjoint analysis processes involve the following six steps.

Key components, especially (2) specifying the attributes, (4) running the survey and (6) interpreting the results, including market simulations, are covered in detail in our deeper dive into conjoint analysis.

  1. Define the research question
    What decision are you trying to inform: e.g. designing a product, setting a price or shaping a public policy?
  2. Specify the attributes
    For the product/policy you are studying, identify relevant attributes and levels within each attribute capable of representing potential alternatives. Be careful to get the wording right!
  3. Design the choice sets
    Some methods require careful pre-specification of choice sets (to get an efficient “fractional factorial design”). Other “adaptive” methods such as 1000minds’ PAPRIKA method have no such design issues because their choice sets are automatically generated in real time.
  4. Run the survey
    Find participants and distribute the survey via an online platform (which simplifies survey delivery and improves participant engagement).
  5. Analyze the data
    If you are using specialized conjoint analysis software, then you can probably rely on its automated reporting of individual or aggregate level preferences. Otherwise, you will have to perform your own analysis (do you have – or have access to – the requisite technical skills?).
  6. Interpret the results
    Use your findings to address your over-arching research question (e.g. designing a product, setting a price or shaping a public policy).

An optional extra stage is to run “what-if” market simulations. By applying participants’ utilities, you can predict market shares for your product relative to competitors’ products and experiment with feature changes to see how demand responds.

Types of conjoint analysis

Conjoint analysis – also variously referred to as choice modeling, trade-off analysis and a discrete choice experiment – has a rich methodological history covering more than a century.

The foundations of conjoint analysis can be traced back to the early 20th century when psychologist Louis Thurstone introduced the idea that people’s observed choices reflect comparisons of their underlying, latent (unobservable) “utilities”, or measures of happiness (Thurstone 1927).

These ideas were formalized and extended through developments in mathematical psychology, economics, statistics and market research, culminating in the modern theories and methods of conjoint analysis.

A particularly notable milestone was the 2000 Nobel Prize in Economic Sciences, awarded jointly to Daniel McFadden for developing the theoretical foundations of modern discrete choice modeling (McFadden 1974).

Jordan Louviere and colleagues unified the conjoint analysis and discrete choice literatures. They showed that Choice-Based Conjoint (CBC) methods are formally equivalent to Discrete Choice Experiment (DCE) models estimated from experimental data (Louviere, Hensher & Swait 2000), both as outlined below.

Today, conjoint analysis can be regarded as a family of methods for revealing people’s preferences that differ in how those preferences are elicited and how utilities are inferred. Popular methods in use today are outlined below and summarized in Table 5.

A more detailed discussion of the rich history of conjoint analysis signposted above and fuller explanations of the methods outlined below are in our deeper dive into conjoint analysis.

Rating- and ranking-based conjoint analysis (traditional conjoint)

What’s become known as “traditional conjoint analysis” elicits preferences by asking people to evaluate alternatives by rating or ranking them and analyzing how different features contribute to those evaluations.

  • How it works: Participants rate or rank a series of full-profile alternatives, typically evaluating several alternatives per task rather than choosing from choice sets as in contemporary methods. Regression-based statistical methods are used to decompose these evaluations into utilities.
  • Why it’s popular: Traditional conjoint analysis remains in use due to its simplicity, transparency and long-standing acceptance in applied research.

Choice-Based Conjoint (CBC) / Discrete Choice Experiments (DCE)

Choice-Based Conjoint (CBC), which is methodologically the same as Discrete Choice Experiments (DCE), elicits people’s preferences by asking them to repeatedly choose from a series of choice sets.

  • How it works: Choice sets typically contain three to five full-profile alternatives, often including a “none” or “status quo” option. The resulting choice data is analyzed using statistical estimation methods to derive utilities.
  • Why it’s popular: CBC/DCE aligns with economic theory and observed choice behavior, making it a common method in economics, health, transport and environmental research, for example.

Adaptive Choice-Based Conjoint (ACBC)

Adaptive Choice-Based Conjoint (ACBC) extends Choice-Based Conjoint (CBC) by using preliminary preference-elicitation tasks to tailor the choice sets presented to each participant.

  • How it works: Preliminary tasks include specifying a preferred combination of attribute levels (“build-your-own”) and screening out unacceptable options, from which full-profile choice sets are tailored for each participant.
  • Why it’s popular: ACBC extends CBC to more complex products and applications with more attributes and levels, where standard choice sets would be inefficient or overly burdensome.

Best-Worst Scaling (MaxDiff)

MaxDiff asks participants to choose their most- and least-preferred alternatives from choice sets containing three to six partial-profile alternatives.

  • How it works: From a person’s most- and least-preferred choices in each choice set, additional pairwise rankings involving the set’s other alternatives are inferred and used for estimating utilities.
  • Why it’s popular: MaxDiff is efficient in the sense that it reduces the number of decisions required to determine multiple pairwise rankings (from which utilities are derived).
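The inference behind MaxDiff’s efficiency can be sketched as follows; the items and task are hypothetical:

```python
# Pairwise rankings implied by one best-worst (MaxDiff) task: the "best"
# item beats every other item in the set, and every other item beats
# the "worst" item.
def implied_pairs(items, best, worst):
    pairs = [(best, x) for x in items if x != best]                 # best beats the rest
    pairs += [(x, worst) for x in items if x not in (best, worst)]  # rest beat worst
    return pairs

# One task over four items yields 5 of the 6 possible pairwise rankings
# from just two answers (only A vs C remains unknown).
print(implied_pairs(["A", "B", "C", "D"], best="B", worst="D"))
# [('B', 'A'), ('B', 'C'), ('B', 'D'), ('A', 'D'), ('C', 'D')]
```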

PAPRIKA method

PAPRIKA – an acronym for Potentially All Pairwise RanKings of all possible Alternatives – asks participants to choose from choice sets involving pairwise trade-off comparisons between attributes.

  • How it works: The choice sets comprise partial-profile alternatives defined on only two attributes at a time, though more attributes are also possible. Each participant’s choice sets are selected adaptively, with new ones based on earlier choices.
  • Why it’s popular: PAPRIKA identifies a stable, individual-level utility function for each participant, and is well suited to applications where transparency, easy interpretability and low responder burden are important.
Table 5: Types of conjoint analysis

Method | Key advantage | Used for
Traditional (rating/ranking) | Relatively simple to set up and run for beginners or DIYers | Studies with relatively few attributes, e.g. 3–6
Choice-Based (CBC/DCE) | Realistic choice tasks with solid theoretical foundations | All kinds of market research and policy-making
Adaptive (ACBC) | Customized choice sets that reduce responder burden by “adapting” | Studies with many attributes, e.g. more than 10
MaxDiff | Efficient elicitation of pairwise rankings of alternatives | Basic studies with few attributes and levels
PAPRIKA (1000minds) | Individual utilities generated from simple pairwise trade-offs | All kinds of applications, including complex ones

Though the five methods outlined above are all different, they all produce fundamentally the same outputs: utilities representing the relative importance of attributes and their levels, alternative rankings and total utility scores.

The five methods are explained in more detail – in particular, how participants’ preferences are elicited and how utilities are inferred from those elicited preferences – in our accompanying deeper dive into conjoint analysis.

Benefits and challenges of conjoint analysis

Conjoint analysis is one of the most widely used quantitative methods in market research because it offers important advantages over other survey methods for tapping into people’s preferences:

  • Realistic decision modeling: Simulates the kinds of trade-offs faced by people when making choices in real life.
  • Quantitative insights: Reveals the relative importance of specific attributes, quantified as weights, or part-worth utilities.
  • Actionable data: Helps prioritize specific product features or improvements and policy-making options, including “what-if” scenario testing.
  • Broad applicability: Works across a wide range of industries, audiences and use cases.

Conjoint analysis also complements other methods of data collection, such as focus groups or satisfaction surveys, by capturing structured preferences data based on people’s choices.

Conjoint analysis also comes with potential challenges:

  • Conceptual understanding: Participants need to be familiar with the attributes in the choice sets and understand what they are choosing between.
  • Responder burden: Surveys with lots of attributes presented together – i.e. full-profile instead of partial-profile choice sets – can be cognitively difficult to complete.
  • Survey design complexity: The choice sets for some conjoint analysis methods need to be carefully pre-specified (“fractional factorial design”).
  • Model assumptions: Some statistical models (e.g. multinomial logit) for estimating participants’ utilities are based on assumptions that are not always appropriate.

What is conjoint analysis used for?

Conjoint analysis is used for many types of applications across a wide range of industries worldwide. Here are some common examples of conjoint analysis.

Market research

  • Uncover what matters most to customers and identify segments for targeting
  • Test new product concepts and messaging
  • Predict market shares and simulate demand under various “what-if” scenarios

Example:

A beverage company tests different combinations of flavor, packaging and price. Conjoint analysis in marketing research helps identify which attributes influence customer preferences the most and how to position the product in the market.

1000minds has been used for many market research studies – e.g. as documented in many journal articles.

Product design and development

  • Identify optimal configurations of product features
  • Refine pricing strategies
  • Understand feature trade-offs that drive demand

Example:

A wearable tech company uses conjoint analysis to learn about how customers value features like battery life, weight, price and brand – e.g. revealing that customers mainly prioritize battery life and price, with these insights driving product development and marketing.

Health care

  • Capture patient and clinician preferences to improve service delivery
  • Guide resource allocation and clinical decision-making
  • Support regulatory and HTA (health technology assessment) submissions and assessments

Example:

Using conjoint analysis, a hospital evaluates patient preferences for appointment characteristics: virtual or in-person, time of day, provider gender and wait times. Results inform scheduling and service design to improve patient satisfaction and efficiency.

1000minds has been widely used in the health sector, including for developing tools for prioritizing patients, evaluating health technologies and guiding resource allocation; see our healthcare decision-making page for more information.

Public sector

  • Engage citizens in policy design and decision-making
  • Evaluate trade-offs in infrastructure, taxation and environmental programs, for example
  • Prioritize government spending based on public values in pursuit of value for money

Example: A transportation agency uses conjoint analysis to assess citizens’ priorities for a new rail project. The conjoint study highlights that travel time and environmental impact are more important than ticket price.

1000minds has helped governments around the world make public-sector decisions that are valid, inclusive and defensible – e.g. see our coastal erosion success story.

Organizational management

  • Understand employee preferences for workplace benefits
  • Design cost-effective, preferred compensation and benefit packages
  • Support internal policy decisions with transparent, data-driven inputs

Example:

An organization conducts a conjoint study to learn how their employees value various benefits: pay vs vacation time vs cafeteria quality vs health coverage, etc. The results help HR design a benefits package maximizing employee satisfaction within budget.

1000minds has supported large organizations to elicit and use preferences data to design benefits packages that align with staff priorities and organizational fiscal goals – read a success story.

Research and academia

  • Conduct research based on conjoint analysis, discrete choice experiments (DCE) and multi-criteria decision analysis (MCDA)
  • Teach conjoint analysis, DCE and MCDA – loved by students!
  • Win grants and produce publishable, peer-reviewed outputs

Example:

A group of researchers apply for and win a grant to use conjoint analysis (DCE) to research citizens’ preferences for climate-change mitigation measures. Their research informs policy-making and results in several journal articles being published.

1000minds is used in 1350+ research projects at 820+ universities and research organizations worldwide, and is trusted by academics for both research and teaching.

Why use 1000minds for conjoint analysis?

1000minds is a globally recognized, award-winning software solution for conjoint analysis and decision-making. Trusted by governments, researchers, universities and organizations worldwide, 1000minds combines scientific validity with user-friendliness.

At the core of 1000minds is its PAPRIKA method – a patented and award-winning algorithm for adaptive conjoint analysis and multi-criteria decision analysis (MCDA). PAPRIKA asks simple pairwise-comparison questions that adapt as people answer them, minimizing responder burden and ensuring high-quality, reliable results.

What sets 1000minds apart?

1000minds has the following advantages relative to other conjoint analysis methods and software.

Quick-and-easy setup

Create conjoint surveys in minutes. Easily define your attributes, customize your survey – in any language and with images – and distribute to potentially 1000s of participants. Need help getting started? 1000minds’ AI Assistant can suggest attributes – and alternatives, if desired – to get you up and running quickly.

User-friendly for all

The intuitive interface and conversational question style make 1000minds easy for both administrators and participants. This user-friendliness supports better engagement, higher-quality data and higher completion rates than other conjoint analysis methods.

Automatic results and reporting

As responses come in, conjoint analysis outputs are automatically produced in real time – no manual analysis is required. The platform generates clear, actionable results that are easy to interpret and share with stakeholders.

Scientific validity

1000minds is used at over 830 universities and research organizations around the world and is regularly cited in peer-reviewed studies (see our 410⁠+ peer-reviewed publications). Its validity and reliability are widely recognized by conjoint analysis academics and practitioners alike.

Award-winning innovation

1000minds has been recognized in 18 innovation awards, including the Consensus Software Award (sponsored by IBM and Microsoft) which praised 1000minds for “blending an innovative algorithm with a simple user interface to produce a tool of great power and sheer elegance.”

Table 6: Summary of 1000minds advantages
| 1000minds advantage | What it means | Why it matters |
| --- | --- | --- |
| Adaptive | Choice sets adapt in real-time based on the participant’s choices | Personalized and efficient (no “design” requirements) |
| Multi-criteria decision analysis (MCDA) | As well as conjoint analysis and DCEs, 1000minds is used for MCDA | Ideal for decision-making (as well as market research) |
| PAPRIKA method | Pairwise trade-off comparisons in a simple, adaptive format | Low responder burden and high-quality data |
| Easy survey setup | Intuitive survey builder with customizable design | Launch studies quickly in any language or format |
| AI assistance | Suggests attributes and alternatives | Get started quickly, and refine as you go |
| Ready-to-go models | Large library of pre-built examples | Use as templates if desired – easy to adapt and share |
| Automated real-time analysis | Built-in reporting and export tools | No need for manual data or statistical analysis |
| High engagement | Simple, conversational question style | Higher response rates and more reliable results |
| Scientific validity | Used at 830⁠+ universities and research organizations | Trusted by academics, businesses and government users |
| Award-winning innovation | Recognized in 18 software and innovation awards | Proven technology with real-world impact |
| Free to try | 15 days’ free trial, with expert support available | Experience 1000minds now – risk free! |

Try 1000minds today

Explore how 1000minds can support your work by creating a free account or booking a demo with one of our experts to learn more.

Want to learn more about conjoint analysis?

You might like to check out our deeper dive into conjoint analysis. This second article includes a survey of methods and a detailed worked example, covering what to consider when specifying the attributes and levels, how to run a conjoint survey (including recruiting participants) and how to interpret the results.


Questions & answers

What is the difference between attributes and levels?

Attributes are the specific features or characteristics being studied, such as price or quality. Levels are the various categories or specific values that each of those attributes can take, such as $10 or $20 for a price attribute.

How many attributes should a conjoint analysis include?

Although more are possible, fewer than a dozen is usually sufficient; four to eight attributes are typical for most applications.

What are utilities?

Utilities are weights derived from a participant’s choices that quantify the relative importance of attributes and their levels. These weights are applied to generate a total “score” for specific product configurations, by which the configurations are ranked.
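To make this concrete, here is a minimal sketch of how an additive utility model scores product configurations. The attributes, levels and utility values below are invented for illustration; real utilities would come from a conjoint survey.

```python
# Hypothetical utilities (attribute-level weights) from a conjoint
# analysis. All values below are invented for illustration only.
utilities = {
    "price":   {"$400": 0.40, "$600": 0.20, "$800": 0.00},
    "camera":  {"basic": 0.00, "pro": 0.25},
    "battery": {"1 day": 0.00, "2 days": 0.35},
}

def score(product):
    """Sum the utilities of a product's attribute levels (additive model)."""
    return sum(utilities[attr][level] for attr, level in product.items())

phone_a = {"price": "$400", "camera": "basic", "battery": "2 days"}
phone_b = {"price": "$600", "camera": "pro", "battery": "1 day"}

print(score(phone_a))  # 0.75
print(score(phone_b))  # 0.45
```

Because phone_a’s total score is higher, this (hypothetical) participant would be predicted to prefer it over phone_b.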

What is the PAPRIKA method?

The PAPRIKA method (Potentially All Pairwise RanKings of all possible Alternatives) is based on pairwise comparisons of alternatives defined on just two attributes at a time, presented adaptively. This means choice tasks are as cognitively simple as possible, resulting in valid and reliable results.

Are conjoint analysis and discrete choice experiments (DCEs) the same thing?

Yes, they are closely related methods for understanding trade-offs. The term “conjoint analysis” is typically used in market research and business, whereas “discrete choice experiment” is more common in academic and policy-oriented settings.

How many participants can I include in a survey?

You can include up to 5000 people in a 1000minds survey. Read our guide summarizing common methods for finding 1000minds survey participants, ranging from emailing them to using a commercial survey panel. 1000minds integrates easily with survey panels, such as those provided by the global companies Cint, Dynata and PureSpectrum.

Can conjoint analysis predict market share?

Yes, by applying participants’ utilities from a conjoint survey, you can run “what‑if” market simulations to predict market shares for your product relative to competitors’ products and test new product designs.
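As a hedged sketch of how such a simulation can work, the example below uses the simple “first-choice” rule: each simulated participant is assumed to choose the product with the highest total utility, and market share is the fraction of participants choosing each product. The product names and utility totals are invented.

```python
# First-choice market simulation sketch: each participant "buys" the
# product with the highest total utility. All numbers are hypothetical.
from collections import Counter

# Each participant's total utility for each product
participant_scores = [
    {"Our phone": 0.75, "Rival X": 0.60},
    {"Our phone": 0.50, "Rival X": 0.55},
    {"Our phone": 0.80, "Rival X": 0.40},
    {"Our phone": 0.45, "Rival X": 0.70},
]

# Count which product each participant would choose
choices = Counter(max(scores, key=scores.get) for scores in participant_scores)
shares = {product: n / len(participant_scores) for product, n in choices.items()}
print(shares)  # {'Our phone': 0.5, 'Rival X': 0.5}
```

To test a new product design, you would re-score the redesigned product with each participant’s utilities and re-run the simulation to see how the predicted shares shift.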

Got a different question? Ask us!
