Lessons in Task Design
Jared Spool’s latest post about task wording in usability tests got me thinking about how the wording of the tasks in my own usability tests has evolved this semester:
> What was happening was that we had guided the users with the wording of our task. By changing the wording, we saw different usability results. The design was the same, yet the results changed because of what we asked the users to do.
I saw similar results in my first usability tests for my RFP application. While I did get a lot of good information to work with, there were a couple of things that just felt fishy.
- Going in, I was pretty concerned about the language used to describe the application’s key data components. Were “snippets” and “projects” too abstract? Yet no one had any trouble with the language.
- A couple of tasks proved unduly challenging (e.g., saving a piece of data when there was a great big green Add Snippet button right below the data fields).
Turns out, my task design needed some refining. My tasks used the application’s implementation language, so participants were guided by the tasks themselves to understand what the language meant. Conversely, because they had gotten used to hearing the implementation language in earlier tasks, they had a hard time when later tasks didn’t use it (e.g., struggling to click the Add Snippet button when the task asked them to save the snippet).
After a lot of guidance, practice, and revision of task wording in my user-centered design class, I was much better prepared to write tasks both for the testing in that class and for the second round of testing on my RFP application.
For example, “search for [topic x]” became “determine if the items in this list already exist.” With the new wording, I learned that some participants weren’t inclined to use the search function until they couldn’t find a topic by scrolling.
Our task language for testing our Markup prototypes was much better from the start. By leaving out any implementation language, we got a better sense of what users would be predisposed to do without instructions. In our Markup testing, participants relied heavily on their understanding of the current version of Markup to decide how to perform actions in the prototype, sometimes to their detriment, so it was important to clarify and highlight the areas where we had improved the interaction design.