TABLE 2

Examples of Rigorous and Appropriate Methods for Various Study Designs

Observational or cohort
 Are possible confounding factors discussed and controlled for?
 Are predictor and outcome variables defined and measured in an appropriate way?
Secondary dataset analysis
 Are possible confounding factors discussed or coded for?
 Are appropriate statistical analyses performed for the sampling strategy of the database?
 Is it clear how variables were chosen in a large national dataset (ie, not just a fishing expedition)?
Clinical trial
 Is the study randomized, controlled, and double-blinded?
 Are the intervention and control groups similar, or are differences controlled for in the analysis?
 Is the intervention appropriate and safe, and are outcomes clinically relevant?
 Was the method of analysis (ie, intention to treat versus per protocol) discussed, and was it appropriate?
Case–control
 Are controls selected in an appropriate manner?
 Is the case–control method used because the disease or problem is rare or delayed in appearance after the risk factor?
Systematic review
 Is the search for the relevant studies detailed and exhaustive?
 Are the studies found assessed for methodologic quality?
 Is publication bias assessed?
 Are sensitivity analyses performed?
Qualitative study
 Is the purpose inductive, related to understanding beliefs and points of view, and aimed at in-depth understanding from the perspective of those experiencing the phenomenon directly?
 Is the philosophical framework stated (eg, grounded theory, phenomenology) or at least implied?
 Is purposeful or theoretical sampling used and described, if appropriate?
 Are data collection methods appropriate for the research objectives?
 Is sampling continued until theoretical saturation or informational redundancy is reached?
 Is the transformation of data to codes or themes clearly described, and is the process iterative?
 Is trustworthiness of data and key findings ensured through well-described strategies (eg, investigator triangulation, member checking, theory triangulation, provision of an audit trail, peer debriefing)?
 Do the themes make sense and condense the data rather than simply repeating quotes from subjects?
Study of diagnostic test
 Is a blind comparison with an independent gold standard performed?
 Are likelihood ratios or sensitivity, specificity, and predictive values used and interpreted appropriately (standard definitions are sketched below the table)?
Education study
 Does the study or curriculum development project address the criteria for scholarship (eg, clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique)?
 Is validity evidence presented for evaluation instruments?
 Does the project address important outcomes (ie, outcomes go beyond learner satisfaction to look at knowledge or skill gained, change in behavior or practice, or effects on patient care, organizations, or systems)?
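
For reference, a minimal sketch of the standard 2 × 2 definitions behind the diagnostic test questions above, assuming the usual cell counts of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN):

```latex
% Standard diagnostic accuracy measures from a 2x2 table (requires amsmath).
% TP, FP, FN, TN are the four cell counts of the 2x2 table.
\begin{align}
  \text{Sensitivity} &= \frac{TP}{TP + FN}, &
  \text{Specificity} &= \frac{TN}{TN + FP}, \\
  \text{PPV} &= \frac{TP}{TP + FP}, &
  \text{NPV} &= \frac{TN}{TN + FN}, \\
  LR^{+} &= \frac{\text{Sensitivity}}{1 - \text{Specificity}}, &
  LR^{-} &= \frac{1 - \text{Sensitivity}}{\text{Specificity}}
\end{align}
```

As a usage note, predictive values depend on disease prevalence while sensitivity and specificity do not, so appropriate interpretation includes considering the prevalence in the study population.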