Adding Usability Research to your Qual Practice
by Kay Corry Aubrey, Usability Resources Inc.
Most people’s everyday lives depend on interacting with technology, whether while making purchases, researching over the internet, checking email, calling someone over a smartphone or setting the oven temperature. How are marketers to know, though, whether the interactive technology is delivering their goods, services, information and messaging in a way that facilitates -- or instead creates barriers to -- their goals? Usability testing can be a very effective tool for finding and fixing problems with an interactive design.
This article provides an introduction to usability testing. My goal is to help familiarize readers with the process of usability testing and its related terminology and to enhance understanding of when to recommend this method to a client. I will describe how this method is similar to and different from other forms of market research, as well as when to weave it into an overall product-development strategy.
I will focus on qualitative usability studies. Qualitative usability tests are live and moderated, with participants performing typical tasks on their own with the product. These sessions are run by a moderator who keeps score of whether participants succeed or fail at key tasks and tracks the participants’ comments and body language.
There are many good reasons for running usability studies. The most obvious are to ensure that users can interact with a product as fluidly and intuitively as possible and to find usability glitches before customers do. Other reasons include the opportunity to see real people using the product before it ships and to compare design approaches. Usability testing aims to identify design flaws, while testing how clearly the product “speaks” to users, meets their expectations and fits into their typical work and task flow.
When Does a Usability Study Make Sense?
The design of something as seemingly insignificant as the touch-screen controls on a vacuum cleaner can represent hundreds of hours of team research, discussion and decision-making. Interactive design has become a mature discipline with standard approaches. Its process is iterative, starting with honing the product concept and progressing towards more detailed aspects of the design.
Usability testing is typically not done while the product idea is still being defined. At this stage, more traditional qualitative research approaches (i.e., focus groups, IDIs, online asynchronous bulletin boards and ethnographic studies) are more effective. These early-stage methods seek to ensure that the product idea makes sense and to determine the right features, functions, information and other content. Their goal is to distill many ideas down to the ones that will work.
The next set of hurdles in interactive design involves organizing and categorizing the product’s data, features and functions. These aspects of a product are referred to collectively as its “information architecture.” For a product to seem intuitive, the information architecture needs to align with the target user’s way of thinking. Determining how to organize a complex set of features or data so that it makes sense to target users is one of the most challenging aspects of interactive product design. Qualitative techniques such as ethnographic studies, task analysis, interviews and card sorts can provide direction on how to organize the product’s underlying information and functions. Usability tests, which are task based, are not effective at this stage since there is nothing yet to touch or to interact with.
Synchronizing the Testing Approach with the Product Development Stage
The more concrete aspects of the product design — such as product labels, the overall layout and graphics and how the user navigates from one screen to the next — are tangible enough to be usability tested. For instance, you can begin usability testing an online system for completing income tax once you have electronic or paper mockups that show where to enter income, deductions and taxes already paid, as well as navigational elements such as menus, links and buttons to take users through the process. Or, with a new kind of product packaging, you could usability test the initial user experience as soon as you have a physical mockup that exposes the new aspects of the package design.
It is important to find fundamental design flaws early because they are harder to fix later on. Therefore, start usability testing as soon as there is a physical rendition of the product. This rendition can be as simple as a paper prototype or foam-core mockup.
Do not worry about presenting a fully fleshed-out concept. In fact, marketers are better off spending as little time as possible on early mockups, to avoid becoming emotionally attached to any particular design. Early usability studies aim to generate feedback on lots of initial design approaches. The stimuli need only contain enough detail to elicit feedback on a specific aspect of the design (e.g., does this website navigation make sense? Do the graphics direct people’s attention to the right functions, or are they distracting?). The low-cost paper-prototype methods pay off handsomely because they allow you to develop the best design with minimal effort.
The next stage is to do usability testing against a functioning interactive prototype. For a website, this might be a simple set of clickable HTML pages. The electronic prototype should give participants “the feel” of what it will be like to interact with the product. It should incorporate layout, branding and visual design, giving the participant a fairly coherent impression of what the final product will look like.
Major Steps in Running a Usability Study
The basic steps for running a usability study generally parallel those of a qualitative research project: plan the study, run it and then analyze and report on the findings. The chart below outlines the major activities at each stage of a usability study. The steps that are unique to running a usability study are in boldface type.
Where to Run a Usability Study: a Range of Choices
It is important to run studies “early and often,” conserving budget where you can. Running the studies in a facility makes sense when you have the budget and a large number of stakeholders who wish to observe the sessions. Low-cost options include live remote testing and running sessions in a conference room or in the natural setting where consumers would use the product. Each approach has its pros and cons (see chart below), but when money is the issue, choosing a low-cost venue is a reasonable option.
Live remote studies can be run using a telephone and standard web-conferencing software such as Glance or WebEx. Usability testing software such as Morae allows you to record both the screen and a webcam view of the participant. You are also able to transmit the live video image to observers over wireless or VPN.
Usability studies involve only a few participants (eight to ten is typical), so it is important to make sure you recruit carefully. Focus on just one or two demographic profiles. Narrow down the user profiles to those target audiences that will be most critical to your product’s success in the marketplace.
When recruiting for a usability study, choose confident and articulate participants with passion and expertise in the product category. Aim to recruit people with varying levels of experience with the product. “Power users” (those who have experience with many products in a category) often provide the most useful feedback, but they are not representative of typical users.
When recruiting participants for a usability study, define the required foundation skills you would expect a typical user of that product to have. For example, if you are testing an iPhone application, make sure your participants are fluent at using an iPhone; otherwise, you may end up testing the iPhone platform rather than your application. Your criteria might include “participant owns an iPhone and has used it every day for at least six months.”
Finally, when accepting participants into the study, the recruiter needs to ensure that new participants understand what it means to take part in a usability study. Being in a usability study is very different from taking part in a focus group or interview. Full disclosure of procedures is an absolute necessity. Participants need to know that they will be asked to carry out tasks independently on a new product and that the product team may observe the sessions. If eye-tracking or other biometric data will be collected during the study, the recruiter should mention this as well. The follow-up email to participants should repeat this information, so that they have an accurate picture of what to expect when they arrive at the session.
Collecting Usability Information
A usability session starts with a brief interview in which the participant describes his or her background as it relates to the product category. The participant then proceeds to the actual study, in which he or she attempts to complete a sequence of pre-defined tasks in key product areas (see below for tips). This is the core of the usability study. Once the tasks are complete, the moderator asks a series of debriefing questions, allowing the participant to offer ideas and feedback. The task list is the main data-collection instrument in a usability study.
Tips on Writing a Task List
Use a brief scenario to provide context for the tasks. For instance, if you were running a study on an iPad web-conferencing application, you might describe this scenario:
“You run a family-owned business that has just deployed the Acme Web Conferencing product. Today, you are working from home but need to hold several meetings with your managers and employees. For the first time, you will be using your iPad to participate.”
Tasks should be short, easy to understand and focused on one operation or aspect of the product. The tasks should guide specific behavior, without explicitly telling the participant what to do. Arrange tasks in a logical order to mimic a natural workflow, and make sure to “usability test” your task list ahead of time by running pilot sessions with friends and family.
Here’s an excerpt of how a task list from a study using the above scenario might look:
Task 1. Your administrative assistant, Kathleen, has set up an Acme Web conferencing meeting for you. Go to your email, and find the invitation.
Task 2. Join the conference.
Task 3. Enter your information: Chris Doe, doechris173@myemail.com, 987-123-9876.
Task 4. Enter the meeting room.
Task 5. Once you are in the meeting room, connect your phone to the web conference so that you can hear what is going on.
Task 6. You are expecting Lee, Carl and Kathleen. Have they arrived? What are they up to? Do you have all the information you need about them? Do you know who is hosting this meeting?
Task 7. Can you see yourself on the list?
Moderating a Usability Study
As in other forms of qualitative research, the moderator’s goal at the beginning of a session is to establish rapport: gain the participant’s trust, let him know what to expect and make sure he is comfortable in the testing environment so he can think clearly. The moderator might start with a little chitchat to put the participant at ease, then ask questions about his background as it relates to the product category. If the participant has not been in a usability study before, the moderator should walk him through what he needs to do, answer any questions and then disclose recording devices and back-room observers.
It is hard for participants to get away from the feeling that the usability study is really a test of their intelligence. (In reality, it often is, but consumers should not have to think too hard to use a product!) When speaking with the participant, the moderator should refer to the session as a “study” vs. a “test.” Usability moderators ease anxiety by saying something like, “We are testing the product, and not your intelligence. When you struggle, you are helping us by showing us where the design falls short. We measure success by the number of problems you can uncover for us. So, please, let us know when you are confused.”
As participants work through the task list, the moderator should observe their behavior and keep score of task success and failure, reasons why participants fail or succeed, comments and body language. To avoid any risk of influencing participants, the moderator should remain silent and only answer questions that do not give away clues on how the design works. When tasks are not being timed, ask participants to narrate their thought process aloud so you can understand the logic behind the choices they are making.
Once participants complete their task list, the moderator can answer any outstanding questions or confusion. The debrief discussion is an opportunity to probe deeper into each participant’s thought process and impressions, gather ideas for new features and administer surveys. You might also invite two or three members of the client team into the room so they can talk directly with participants and become part of the process.
Analysis and Reporting
Three main sources figure into the analysis: metrics (e.g., task success/failure by participant and by task), participant comments and nonverbal behavior, and the client team’s response to the usability results. Moderators should continually assess the team’s responses by holding debriefing discussions at the end of each session or day of testing.
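The first of these sources, the moderator’s success/failure scorecard, can be summarized with a simple tally. Below is a minimal sketch of that bookkeeping; the participant IDs, task names and outcomes are hypothetical example data, not results from any actual study.

```python
# Hypothetical sketch: tallying task success/failure by participant and
# by task from a moderator's scorecard. All data below is illustrative.
from collections import defaultdict

# 1 = success, 0 = failure, as recorded by the moderator per task
results = {
    "P1": {"find_invite": 1, "join_conference": 1, "connect_audio": 0},
    "P2": {"find_invite": 1, "join_conference": 0, "connect_audio": 0},
    "P3": {"find_invite": 1, "join_conference": 1, "connect_audio": 1},
}

# Regroup the per-participant records into per-task outcome lists
by_task = defaultdict(list)
for participant, tasks in results.items():
    for task, outcome in tasks.items():
        by_task[task].append(outcome)

# Report how many participants succeeded at each task
for task, outcomes in by_task.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{task}: {sum(outcomes)}/{len(outcomes)} succeeded ({rate:.0%})")
```

A tally like this feeds directly into the per-task success counts reported later in the findings section.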
As in all qualitative research, it is important to understand how team members’ roles influence their interpretation of the results. The moderator needs to stay above the fray, remaining impartial and respecting others’ opinions while pointing to the objective results. The overall goal is to keep the important stakeholders on board so they can absorb and constructively act on the product’s usability issues. Keeping the team together and invested in the process is far more important than resolving every usability issue that is uncovered.
The moderator should also meet with the team before writing the report to make sure everyone is in agreement on the results. The team should develop a list of usability issues, discuss how these issues manifested and agree on some possible solutions. Finally, the moderator and the team should prioritize issues to indicate when each will be addressed. The objective during this meeting is not to achieve a detailed redesign of the product but rather to help the team agree on and prioritize usability issues.
The report follows a typical qualitative research structure. To establish credibility, it should open with the study background, methodology and objectives, and it should provide details on the recruiting and task-list development. This opening section should be followed by a summary of key findings. The final section should contain details relevant to each issue: how and when the usability issue surfaced, possible solutions and the issue’s priority. All instruments and any other supporting detail should be put in an appendix.
Adding human interest will enhance the presentation’s persuasiveness. Consider taking pictures of participants (with their permission) and putting them in the report. Cite specific participants in verbatim comments and stories. As you describe usability issues, have the participants report the issues in their own words.
Present results visually. To make the report easy to skim, use informative headings, subheadings, bulleted lists, tables and other visual representations. The System Usability Scale (SUS) and Product Reaction Card surveys are widely used by usability professionals and produce informative visuals.
Even though your study might have a small sample of participants, attempt to tell the story with numbers (simple counts of key metrics such as how many people succeeded at particular tasks) in charts, graphs and tables.
You might also create a chart that matches how many tasks a participant got right against his or her SUS score. In-person live usability testing is qualitative research, but unlike other forms of qualitative research, you are watching the participant’s behavior, observing whether he can successfully complete each task. Another way to gather metrics from a usability study is to include close-ended survey questions in your debrief discussion, which can help to quantify people’s reaction to key aspects of the product design. It is perfectly fine to include counts and descriptive statistics (mean, median, mode) in your report as long as you include a “Statement of Limitations” that reminds the reader of the value of qualitative research, while noting that the results are based on a small number of participants who may not be representative of the larger population. Inferential statistics are seldom used in the analysis of usability results because of the small number of participants. In most cases, it is impractical to run a study with a large sample.
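For readers unfamiliar with SUS scoring, here is a minimal sketch of how a participant’s ten questionnaire ratings are converted to a 0-100 score: odd-numbered items (positively worded) contribute the rating minus 1, even-numbered items (negatively worded) contribute 5 minus the rating, and the sum is multiplied by 2.5. The ratings below are made-up example data, not results from any actual study.

```python
# Minimal sketch of System Usability Scale (SUS) scoring with
# descriptive statistics. Example ratings are hypothetical.
from statistics import mean, median

def sus_score(ratings):
    """ratings: list of ten 1-5 responses, in questionnaire order."""
    assert len(ratings) == 10
    total = 0
    for i, r in enumerate(ratings):
        # index 0, 2, 4, ... hold odd-numbered (positively worded) items
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to a 0-100 score

participant_ratings = [
    [4, 2, 4, 1, 5, 2, 4, 2, 5, 1],  # participant A (hypothetical)
    [3, 3, 4, 2, 4, 2, 3, 3, 4, 2],  # participant B (hypothetical)
]
scores = [sus_score(r) for r in participant_ratings]
print("SUS scores:", scores)
print("mean:", mean(scores), "median:", median(scores))
```

Per-participant scores like these can then be plotted against task-success counts, as suggested above.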
Leveraging other Qualitative Skills with Usability Testing
Achieving mastery of any qualitative skill takes study, practice and the right mindset. If you think you might be interested in adding usability to your skill set, a good first step is to invest in books that explain the field. A Practical Guide to Usability Testing by Joe Dumas and Ginny Redish is a classic and contains detailed step-by-step instructions on every aspect of running a usability study. Other valuable primers on the usability field are Don’t Make Me Think by Steve Krug, Designing with the Mind in Mind by Jeff Johnson and Designing the Obvious by Robert Hoekman Jr.
Market researchers have unique skills in research design, branding, package design, concept testing, in using projective techniques that help participants go deeper and wider than top of mind, and much more. Researchers who can deliver both objective (and deep) usability data along with relevant and accurate insights on participants’ feelings around product use are highly valued.
Market researchers who want to focus on usability might consider getting formal education in user-experience design, which blends usability with visual and interaction design. Before diving in, however, gain first-hand experience running studies to make sure this is something you really want to do. Usability work is much more detailed and often involves working with engineers who can have a very different worldview than market researchers. As a first step, you might incorporate a mini-study with a handful of tasks into an in-depth interview on a product. Remember to “test your test” by running practice sessions with friends and family beforehand, and good luck!