After logging into your 1000minds account, all you have to do is create a ‘choice model’ and then run a ‘conjoint survey’ involving as many participants as you like.
(Other names for Conjoint Analysis include Choice Modelling and Discrete Choice Experiment.)
Creating a choice model
You can jump back and forth between the steps as much as you like. 1000minds is designed so that you can iteratively refine your model as you learn more about your decision problem by thinking about each step.
Step 1: Attributes
After having set up your model, start by entering your attributes, which can be expressed in qualitative or quantitative terms.
As an illustration, consider the example of using conjoint analysis to help design a car. Conjoint analysis reveals the relative importance to consumers of the main attributes used to represent a car.
For example, these attributes, and levels within each attribute, might be appropriate: (a) Engine size (power), (b) Safety features, (c) Quality of interior finish, (d) Electronics accessories, (e) Overall look and aesthetics.
(Of course, if this were a real application, you would enter your own attributes or adapt one of our ‘demo models’ available inside 1000minds.)
Step 2: Concepts
Enter any car ‘concepts’ (i.e. combinations of attributes) you are considering. If you don’t know them yet, that’s fine. You can enter concepts at any time.
Steps 3-5: Choices, Part-worth utilities, and Ranked choices
Having drafted your choice model – its attributes and levels – you should probably test it by at least partially performing steps 3-5 (see the toolbar above again).
At the Choices step, you’ll be asked a series of simple questions based on choosing between two hypothetical concepts (‘cars’) defined on two attributes at a time and involving a trade-off. (These questions are fundamental to what is known as the PAPRIKA method.)
Depending on how many attributes are included in the model, you might be asked 30-40 questions, for example. Answering them doesn’t take very long for most people.
Your answers to these pairwise-ranking questions determine the ‘part-worth utilities’ of the attributes, representing their relative importance (weights) – reported at the Part-worth utilities step.
Finally, based on your part-worth utilities and how you rated any concepts under consideration at Step 2 earlier (or now if it’s more convenient), 1000minds ranks the concepts from first to last according to their ‘total utilities’ (scores out of 100).
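To make this calculation concrete, here is a minimal sketch of how total utilities are summed and concepts ranked. All attribute names, levels and numbers below are made up for illustration – in practice 1000minds computes all of this for you from your answers.

```python
# Sketch of ranking concepts by total utility (hypothetical numbers).
# Part-worth utilities: for each attribute, the utility of each level.
# The highest level of each attribute carries that attribute's weight,
# and the weights sum to 100.
part_worths = {
    "Engine size": {"1.6 L": 0, "2.0 L": 15, "3.0 L": 30},
    "Safety features": {"Basic": 0, "Advanced": 25},
    "Interior finish": {"Standard": 0, "Premium": 20},
    "Electronics": {"Basic": 0, "Full": 25},
}

# Concepts: combinations of one level per attribute (Step 2).
concepts = {
    "Car A": {"Engine size": "2.0 L", "Safety features": "Advanced",
              "Interior finish": "Standard", "Electronics": "Full"},
    "Car B": {"Engine size": "3.0 L", "Safety features": "Basic",
              "Interior finish": "Premium", "Electronics": "Basic"},
}

def total_utility(concept):
    """Sum the part-worth utilities of the concept's levels (a score out of 100)."""
    return sum(part_worths[attr][level] for attr, level in concept.items())

# Rank the concepts from first to last by total utility.
ranking = sorted(concepts, key=lambda name: total_utility(concepts[name]), reverse=True)
for name in ranking:
    print(name, total_utility(concepts[name]))
```

Here ‘Car A’ ranks first (15 + 25 + 0 + 25 = 65) ahead of ‘Car B’ (30 + 0 + 20 + 0 = 50).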
In the process of at least partially undertaking these three steps – 3: Choices, 4: Part-worth utilities, 5: Ranked choices – you are likely to have learned more about your choice problem, inspiring you to iteratively refine your model’s attributes and levels (step 1) and concepts (step 2).
When your model is ‘good to go’, you are ready to run the conjoint survey.
Running a conjoint survey
A conjoint survey asks participants to answer a series of simple questions based on choosing between two hypothetical alternatives (‘cars’) defined on two attributes at a time and involving a trade-off – like the screenshot above for the Choices step.
People’s answers reveal their individual ‘part-worth utilities’, and also on average for the group, representing the relative importance of the attributes (weights).
All you have to do is follow these simple steps. (If you’re running a survey you should probably have a look at our step-by-step guide to a conjoint survey.)
Step 1: Conjoint survey
After having chosen to undertake a conjoint survey at the activities step (for other 1000minds activities, see group decision-making), start by setting up your survey.
Step 2: Email and webpage
Involve as many participants as you like – 10s, 100s or 1000s of people, depending on your application.
Simply enter people’s email addresses into 1000minds, and they’ll be invited to take part. People can also self-enrol from a sign-up webpage, which is great for ‘convenience’ or ‘snowball’ sampling.
Step 3: Participant progress
Check on participants’ progress in the survey, and send reminder emails to people who haven’t finished yet if appropriate.
Step 4: Results
Part-worth utilities are reported for each individual participant, along with medians, means and standard deviations for the group. Also available (not shown below) are participants’ rankings of attributes and concepts.
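As an illustration of these group summaries, here is a minimal sketch computing the median, mean and standard deviation of one attribute’s weights across participants. The numbers are hypothetical – 1000minds reports these statistics for you.

```python
import statistics

# Hypothetical weights (%) for one attribute across five survey participants.
engine_size_weights = [25.5, 30.0, 22.0, 28.5, 24.0]

median = statistics.median(engine_size_weights)
mean = statistics.mean(engine_size_weights)
std_dev = statistics.stdev(engine_size_weights)  # sample standard deviation

print(f"median = {median}, mean = {mean}, std dev = {std_dev:.2f}")
```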
A participant’s part-worth utilities represent the relative importance of the attributes to that participant. Each attribute’s relative importance (weight) – i.e. relative to the other attributes – is represented by the (bolded) value of its highest-ranked level.
For example, in the table above Legolas values Engine size at 25.5% and Safety features at 12.8%; therefore the first attribute is approximately twice as important as the second. The bolded values – the weights – sum to 100% (i.e. 1).
In addition, an attribute’s utility values for levels between the lowest and highest represent both the attribute’s relative importance and those levels’ performance relative to the highest level – hence ‘middle’ values are less than the bolded values.
(For more information, see interpreting preference values.)