Quality and Usability of Common Drug Information Databases: METHODS part 2

31 Dec 2010

Evaluation of Database Performance

Each study participant used each database to answer a set of 5 randomly assigned drug information questions (for a total of 15 questions per participant). The order in which study participants evaluated the 3 drug information databases was also randomly assigned, and over the whole study each database was used to attempt to answer each of the 15 questions at least once. To consume no more than 1 hour of each participant's time, participants were asked to spend no more than 3 minutes on each drug information question. Study participants used a 3-point scale to note whether answers to the drug information questions posed were present and complete (see Appendix 1); the investigators made no attempt to confirm the accuracy of the answers located. The performance score for each database was calculated as the mean score across all 15 questions, with each question weighted equally.
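The equally weighted scoring scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 3-point coding (2 = present and complete, 1 = present but incomplete, 0 = not found) is an assumption for the example.

```python
# Hypothetical sketch of the performance score: one rating per question
# on an assumed 3-point scale (2 = present and complete, 1 = present but
# incomplete, 0 = not found); the database score is the equally weighted
# mean across all 15 questions.
def performance_score(ratings):
    """Mean score across questions, each question weighted equally."""
    if len(ratings) != 15:
        raise ValueError("expected one rating per question (15 total)")
    return sum(ratings) / len(ratings)

ratings = [2, 2, 1, 2, 0, 2, 1, 2, 2, 1, 2, 2, 0, 1, 2]
print(round(performance_score(ratings), 2))  # -> 1.47
```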

Evaluation of Database Usability

After using all 3 drug information databases to answer the assigned drug information questions, study participants were asked to evaluate the usability of each product by completing a usability questionnaire (see Appendix 1), which was adapted from previously published questionnaires. Study participants used a 5-point Likert scale to rate each database within 7 usability domains (database layout, navigation, speed, accuracy of content, amount of information, timeliness of information, and user satisfaction). The usability of each database was then calculated as the mean score for each individual usability domain, as well as the overall usability score across all 7 domains.

Users’ Preferences

After evaluating performance and usability, study participants were asked to rank the online drug information databases in order of preference on a scale of 1 to 3, where a rank of 1 represented the most preferred database and a rank of 3 represented the least preferred database. The distribution of rankings for each database was compared, and mean rank scores were calculated for each database. Subgroup analyses of the mean rank scores for each database, stratified by level of pharmacy training attained, years in pharmacy practice, and prior database access, were also performed. Finally, study participants were invited to comment on their experiences with the databases. The study investigators performed independent thematic analyses of these comments to supplement comparisons of the quality, performance, and usability of the databases and users' preferences.
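The mean rank calculation above can be illustrated with a small sketch. The database labels and rankings are hypothetical; a lower mean rank indicates a more preferred database.

```python
# Hypothetical sketch: each participant ranks the 3 databases from
# 1 (most preferred) to 3 (least preferred); the mean rank per database
# summarizes overall preference (lower = more preferred).
from collections import defaultdict

def mean_ranks(rankings):
    """rankings: list of dicts mapping database name -> rank (1-3)."""
    collected = defaultdict(list)
    for ranking in rankings:
        for db, rank in ranking.items():
            collected[db].append(rank)
    return {db: sum(r) / len(r) for db, r in collected.items()}

rankings = [
    {"A": 1, "B": 2, "C": 3},
    {"A": 2, "B": 1, "C": 3},
    {"A": 1, "B": 3, "C": 2},
]
print(mean_ranks(rankings))  # A is most preferred (lowest mean rank)
```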

Statistical Analysis

Descriptive statistics were used to calculate mean scores and 95% confidence intervals for database quality, performance, and usability and for users' preferences. Inferential statistics were used to compare mean scores via analysis of variance (ANOVA) and 2-sample t tests. Values of p below 0.05 were considered statistically significant.
