“Think aloud” and “Near live” usability testing of two complex clinical decision support tools

“Think Aloud” and “Near Live” usability testing were performed on two clinical decision support (CDS) tools. The objective was to further understand the facilitators of usability and to evaluate the types of additional information gained from proceeding to “Near Live” testing after completing “Think Aloud” testing.

  • Authors
  • Alexander O'Connell
  • David A. Feldstein
  • Devin M. Mann
  • Lauren McCullagh
  • Rachel Hess
  • Rebecca Mishuris
  • Safiya Richardson
  • Thomas McGinn
  • Published
  • International Journal of Medical Informatics



Low provider adoption continues to be a significant barrier to realizing the potential of clinical decision support. “Think Aloud” and “Near Live” usability testing were conducted on two clinical decision support tools. Each tool was composed of an alert, a clinical prediction rule estimating the risk of either group A Streptococcus pharyngitis or pneumonia, and an automatic order set based on that risk. The objective of this study was to further understanding of the facilitators of usability and to evaluate the types of additional information gained from proceeding to “Near Live” testing after completing “Think Aloud” testing.


This was a qualitative observational study conducted with 12 primary care providers at a large academic health care system. During “Think Aloud” testing, participants were given written clinical scenarios and asked to verbalize their thought process while interacting with the tool. During “Near Live” testing, participants interacted with a mock patient. Morae usability software was used to record full screen capture and audio during every session. Participant comments were placed into coding categories and analyzed for generalizable themes, and themes were compared across the two usability methods.


“Think Aloud” and “Near Live” usability testing generated similar themes under the coding categories visibility, workflow, content, understandability, and navigation. However, they generated significantly different themes under the coding categories usability, practical usefulness, and medical usefulness. During both types of testing, participants found the tool easier to use when important text was distinct in its appearance, alerts were passive and appropriately timed, content was up to date, language was clear and simple, and each component of the tool included obvious indicators of next steps. Participant comments reflected higher expectations for usability and usefulness during “Near Live” testing. For example, visit aids such as automatically generated order sets were felt to be less useful during “Near Live” testing because they would not be all-inclusive for the visit.


These complementary types of usability testing generated unique and generalizable insights. Feedback during “Think Aloud” testing primarily helped to improve the tools’ ease of use. The additional feedback from “Near Live” testing, which mimics a real clinical encounter, was helpful for eliciting key barriers and facilitators to provider workflow and adoption.

  • Keywords
  • Clinical Decision Support
  • Health Informatics
  • Provider Adoption
  • Usability
  • Usability Testing
  • User Experience
  • Workflow