How you write your UX test tasks can make or break the quality of the insights you get from your UX testing.
Here are 8 pro tips from our team of UX researchers, who’ve designed thousands of tasks for our biggest and most complex projects.
1. Set a scenario
Don't just dump users straight into your tasks—set a scenario for users to follow to encourage more authentic behaviour.
Effective: “Imagine it’s time to renew your home insurance and a friend has sent you a link to this site...”
Less effective: “Go to the website and complete the following tasks…”
To add a scenario to your test design, navigate to the User Instructions menu (Start a Test > Test Script) and choose Set a Scenario.
2. Focus on tasks — NOT opinions
Remote UX testing is most effective when you set tasks for users to complete and listen to their in-the-moment thoughts. It's tempting to ask survey-style questions, but they'll give you less useful data.
Effective: “Imagine this is a gift for a friend, and they may want to return it. What is the returns policy?”
Less effective: “Out of 10, where 1 is poor and 10 is great, how would you rate the returns policy?”
To add this to your test design, simply select the Task instruction type from the User Instructions menu.
3. You don’t (always) need to start users off on your homepage
When designing your tasks, think about where users might naturally start their journey – it’s not always from your homepage. Consider starting users off on a landing page or even from a Google search to make things more realistic.
4. Aim to have tests last around 20 minutes
There’s no firm rule here, but avoid cramming as much as possible into your tasks. Write tasks that you could complete in roughly 15 minutes, leaving room for the average person to finish within 20.
Tip: Users naturally get bored after roughly 20 minutes, and the quality of the insights you get declines rapidly beyond that point.
5. Ask for dummy data
Your tasks might require users to complete forms that ask for personal data. Since you can see what they’re typing in the session videos, ask them to use data that’s realistic but not their real details.
Example: “You may need to complete a form and share personal data. Please use fake data that’s close to your real information e.g. use your real first name, but a fake surname; correct year of birth, fake day and month; actual postcode, fake house number.”
Alternatively, specify the dummy data that a user should enter—this will make it easier for you to identify and remove this dummy data from your system if necessary.
Example: “Please fill in the form, using the email firstname.lastname@example.org”
6. Know the difference between directed and undirected tasks
You can observe the most natural behaviour by using a self-generating (or undirected) task. Ask users to think of something they’d genuinely want to buy or do on a website, then watch how they go about achieving it.
Example: “Think of something you’d genuinely buy from this site. It could be for yourself or a gift for a friend. Say out loud what it is and then buy it.”
This contrasts with a more directed approach, where you tell users to follow a specific path or funnel because you want insights to help fix issues in that area. Users can’t read your mind, so be as specific as possible about what they need to do.
Example: “Using only the menu bar, find a list of size 12 black dresses for under £50.”
7. Please don't ask people to buy things!
Users won’t pay for an item unless you explicitly tell them to. So when you write tasks, tell users to stop before entering their payment details.
Effective: “Please follow the checkout process, but STOP before you add your card details.”
Where you need a purchase journey to be completed, you can supply dummy card details for users to pay with.
8. Preview your test before launching!
The people taking your test only see one instruction at a time and can’t go back to previous tasks, so it’s important to preview your test before launching to make sure everything makes sense.
Using the Preview function (Start a Test > Test Script) will allow you to step through your instructions one-by-one, and see your tasks as your users will see them.
Alternatively, you can launch a dry run of your tests (with one or two users) to check whether your tasks give you the kinds of insights you need. You can then refine your tasks (if necessary) before running a larger study.