Other Inspection Methods

Brandi Geister

Created on October 5, 2025

Transcript

Other Inspection Methods

Testing Usability in User Experience

Overview for this lecture presentation:

Overview of Other Inspections

Steps & Examples

Two articles that take a deeper dive, for more information

Two short videos

Formal Usability Testing

  • A set of task-based sessions with a sample of the "potential" or "actual" target users of the application
  • Process is used to uncover actual difficulties users have when interacting with a system
  • Users are observed while they perform tasks and user sessions are recorded
  • You can identify 85% of usability problems with just 5-10 users

Formal Usability Testing

  • The general approach is to let the overall flow of the user experience happen or emerge, then:
    • Observe it
    • Analyze it
    • Relate it back to the design elements
  • There are three main roles in a formal usability test:
    • Users, Moderator, Observers

Users

  • Are not the object of the evaluation
  • They should not feel in any way evaluated or judged
  • Should be treated as partners who help you evaluate the system
You can say things like, "I'm not evaluating you, I'm evaluating the system. So your honest feedback is what I'm most interested in," to help users feel more comfortable.

Moderators

  • Manage the test (logistics, dialogue, pace) and coordinate its successful completion
  • Elicit and observe the session but do not encourage consensus or agreement
  • Remain neutral and refrain from personal opinions about the site, design, test, or user
  • It's the user's opinion you're looking for, not the moderator's!

User Recruiting

  • 5-10 users is a typical sample size
  • Avoid mixing first-time users and frequent users
  • Avoid participants unfamiliar with computers or the web
  • Set out to get comparable results across participants
Ask them all the same questions, have them complete the exact same tasks, and create an identical testing environment.

Data Capture

  • Quantitative Indicators
    • Effectiveness (task success rate)
    • Efficiency (time on task)
    • Errors (Wrong paths or actions)
    • Perceived task difficulty
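The quantitative indicators above can be summarized from simple per-participant task logs. A minimal sketch with illustrative data; the field names and values are assumptions, not a standard schema:

```python
# Summarizing quantitative usability indicators from per-participant
# task logs: effectiveness (success rate), efficiency (time on task),
# and errors. The session records below are illustrative.
from statistics import mean

sessions = [
    {"user": "P1", "success": True,  "time_s": 74,  "errors": 1},
    {"user": "P2", "success": True,  "time_s": 102, "errors": 0},
    {"user": "P3", "success": False, "time_s": 240, "errors": 3},
]

effectiveness = mean(s["success"] for s in sessions)              # task success rate
efficiency = mean(s["time_s"] for s in sessions if s["success"])  # time on successful tasks
error_rate = mean(s["errors"] for s in sessions)                  # errors per participant

print(f"success rate: {effectiveness:.0%}")
print(f"mean time on task (successes only): {efficiency:.0f} s")
print(f"mean errors per participant: {error_rate:.1f}")
```

Restricting the efficiency average to successful attempts is one common choice; failed attempts end at a stopping rule, so their times measure the rule, not the task.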

Data Capture

  • Quantitative Indicators (observed behaviors, counted or rated)
    • Disorientation, stops, frustration episodes, waiting periods, wandering periods, satisfaction, engagement

Data Capture

  • Qualitative Indicators
    • Semi-Structured Interviews
      • "Overall, how would you characterize your experience with this site?"
      • "What was your favorite feature on this site?"
      • "What was your least favorite feature?"
    • Think-aloud protocol: Audio record user comments and review them later

Step One: Introduce and Explain the Purpose of the Session

  • Describe the purpose of the observation. Set the participant at ease by stressing that you're testing the website, not the participant
  • For example, you could say:
    • You're helping us by testing this website in its early stages
    • We're looking for places where the site may be difficult to use
    • If you have trouble with tasks, it's the website's fault, not yours
    • If you can locate the trouble spots, then we can improve the site.
    • Remember, we're testing the website, not you.

Step Two: Explain the tasks the participant will complete

  • If applicable, have each participant sign an informed consent form.
  • Make sure you inform participants that they can quit at any time if they are uncomfortable.
  • Participants shouldn't feel like they're locked into completing tasks
  • Say something like this:
    • Although I don't know of any reason for this to happen, if you should become uncomfortable or find this test objectionable in any way, you are free to quit at any time.

Step Three: Observe participants as they execute tasks

  • Ask participants to think aloud during the observation
  • By listening to participants think and plan, you can see their expectations for your site, as well as their intentions, explicit reactions, and problem-solving strategies
  • You could say:
    • We get a great deal of information from these informal tests if we ask people to think aloud as they work through the exercises
    • All you do is speak your thoughts as you work
    • If you forget to think aloud, I'll remind you to keep talking

Step Four: Assign Tasks

  • Introduce the website and describe the tasks
  • Explain what the participant should do and in what order
  • Give the participant written instructions for the tasks
    • Use formally printed sheets or (clearly handwritten) "task cards"

Step Five: Ask for Questions

  • Ask if there are any questions before you start
  • Then begin the observation

Step Six: Observe Continued

  • Avoid "praising" the user
    • "How am I doing?" , "Good job! You're doing great!"
  • Do NOT ask:
    • "Do you like this feature?" Focus groups are for eliciting opinions; usability tests are for eliciting behaviors
    • "Is this what you were expecting to be on this page?"
      • The typical answer would be, "I don't know, I guess so..."

Step Seven: Track Tasks and Success

  • Task success indicates effectiveness of the interaction
  • Various levels of success
    • Complete Success: either with assistance or without
    • Partial Success: either with assistance or without
    • Failure: Participants thought it was complete, but it wasn't. OR Participant gave up

Giving Assistance

  • Carefully consider WAITING some time before providing assistance (intervene only when the situation is truly problematic)
    • Moderator takes the participant back to the homepage or resets to an initial state
    • Moderator restates the task (if the user forgot it)
    • Moderator provides hints
      • For example, "Why not try the 'Plan Your Visit' tab?"

Score Methods

  • Method One
    • Complete success (without assistance) = 1.0
    • Partial success, or might need assistance = 0.5
    • Cannot complete task = 0.0
  • Method Two
    • 1 = No anticipated problems completing task
    • 2 = Minor problems completing task
    • 3 = Major problems completing task
    • 4 = Cannot complete task
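Score Method One above can be sketched as a simple lookup-and-average over participants. The outcome labels and task names below are illustrative assumptions:

```python
# Sketch of Score Method One: map each task outcome to a score and
# average per task across participants. Labels and data are illustrative.
SCORES = {"complete": 1.0, "partial": 0.5, "fail": 0.0}

# outcomes[task] -> one outcome label per participant (5 participants here)
outcomes = {
    "find_hours": ["complete", "complete", "partial", "fail", "complete"],
    "buy_ticket": ["partial", "complete", "fail", "partial", "complete"],
}

task_scores = {
    task: sum(SCORES[r] for r in results) / len(results)
    for task, results in outcomes.items()
}

for task, avg in task_scores.items():
    print(f"{task}: {avg:.2f}")
```

Averaging per task (rather than per participant) highlights which tasks, and therefore which parts of the design, cause the most trouble.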

Scoring Tasks: Stopping rules for 0 pts

  • Rule 1: Participants work on a task until they either complete it (correctly or not), give up, or seek assistance
  • Rule 2: Three wrong paths, or three attempts from the start, can end the task session
  • Rule 3: Define a cut-off time based on precise requirements
    • For example: cut off after 4 minutes

Time on Task

  • Defined as the time that goes from the moment the user...
    • Has finished reading the task
    • Has understood it (no more questions asked)
    • Directs their attention to the application
  • Until the moment one of the predefined stopping rules is met
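This definition amounts to a stopwatch that starts once the participant turns to the application and stops when a stopping rule fires. A minimal sketch:

```python
# Sketch of capturing time on task: the clock starts when the participant
# has read and understood the task and turns to the application, and
# stops when one of the predefined stopping rules is met.
import time

class TaskTimer:
    def start(self) -> None:
        # Call when the participant directs attention to the application.
        self._t0 = time.monotonic()

    def stop(self) -> float:
        # Call when a stopping rule fires; returns elapsed seconds.
        return time.monotonic() - self._t0

timer = TaskTimer()
timer.start()
# ... participant works on the task ...
elapsed = timer.stop()
print(f"time on task: {elapsed:.1f} s")
```

Using `time.monotonic()` rather than wall-clock time avoids glitches if the system clock is adjusted mid-session.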

Error Capture

  • Errors = incorrect actions that may lead to task failure
  • Measure errors when...
    • An error will result in significant inefficiency
    • An error will result in significant costs
    • An error will result in task failure
  • An error might include...
    • Entering incorrect data into a form field
    • Making the wrong choice in a menu or drop-down list

Error Capture

  • Error Opportunity = a chance to make a mistake
  • Example task: Insert a picture into a Word document
    • Error Opportunity 1: Click the wrong button
    • Error Opportunity 2: Insert the wrong picture
    • Error Opportunity 3: Insert the picture into the wrong place
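Counting error opportunities lets you normalize observed errors, so tasks of different complexity can be compared. A minimal sketch with illustrative numbers:

```python
# Sketch: express observed errors relative to the total number of error
# opportunities (opportunities per task x participants). Numbers below
# are illustrative, not from a real study.

def error_rate(errors_observed: int, opportunities: int, participants: int) -> float:
    """Errors per error opportunity, across all participants."""
    return errors_observed / (opportunities * participants)

# 5 participants, a task with 3 error opportunities (as in the example
# above), and 4 errors observed in total:
print(f"error rate: {error_rate(4, 3, 5):.0%}")
```

A per-opportunity rate of a few percent on a simple task and the same rate on a complex one represent a similar level of risk, which raw error counts would hide.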

Collection Templates

  • Post-Task Questionnaires
  • Likert Scales
  • Time on Task
  • System Usability Scale (SUS)
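For the System Usability Scale, the standard scoring maps each of the 10 items (rated 1-5) to a 0-4 contribution, with odd and even items scored in opposite directions because their wording alternates between positive and negative, then multiplies the sum by 2.5 to give a 0-100 score. A minimal sketch; the sample responses are illustrative:

```python
# Sketch of standard SUS scoring (Brooke's scheme): odd-numbered items
# contribute (response - 1), even-numbered items contribute (5 - response);
# the sum of contributions is multiplied by 2.5 for a 0-100 score.

def sus_score(responses: list[int]) -> float:
    """Score one participant's 10 SUS responses (each 1-5)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        r - 1 if i % 2 == 0 else 5 - r  # index 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # one participant's ratings
```

Note that the result is not a percentage; scores are usually interpreted against published benchmarks, with values around 68 often treated as average.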

That's it for now!

Same time next week?

Send An Email!