Pitfalls in Comparative Simulation-Based Research

Posted by Nikita Joshi, MD

Simulation-based research is tough. We all know that it's fun, and we believe it is safer for patients. Beyond that, the data are not so strong! And it's important to care about this. Why? Because educators are fighting for budgets against other strong educational modalities, such as ultrasound, and for valuable time and space with learners such as medical students and residents. Improving simulation research lends credibility to educators who seek more money, time, and focus for their learners. But how do we improve it?

Lineberry et al. reviewed this issue in the August 2013 issue of Simulation in Healthcare in a paper titled Comparative Research on Training Simulators in Emergency Medicine.1 The goal was to review studies that compared different simulation modalities and, through this, to identify important areas of methodological improvement that can guide future simulation research.

Methodology

  • Literature review search strategy
  • Inclusion criteria: Simulation studies focusing on procedural skills within EM such as needle access procedures, airway management, tube thoracostomy, FAST exam, trauma evaluation.
  • Exclusion criteria: Studies on advanced surgical procedures, interpersonal skills, satisfaction or confidence (based upon the concept that this does not correlate well with learning or behavior)
  • A total of 20 studies were included

Challenges in comparative simulation-based research

  • Small Sample Sizes: Most studies do not include a sufficient number of participants, so the studies are underpowered.
  • Reliability of Performance Ratings: Performance ratings are often subjective and may not be consistent from one rater to another. This confounds the data and makes the ratings unreliable.
  • Weighted vs Unweighted Procedural Checklists and Rating Scales: Not all steps in a procedure are of equal importance, but checklists rarely take this into account. The authors assume that high-criticality steps are more closely related to patient outcomes, and hence to morbidity and mortality. Therefore, the most critical steps should be weighted more heavily; for example, performing the incisions of a cricothyroidotomy in the proper sequence is of extreme importance.
  • Type 1 and Type 2 Error Rates: Research traditionally guards against Type 1 error (a study incorrectly concludes that one modality is more effective than the other when in actuality both are equal). The authors argue that this is not as relevant, nor as serious a methodological defect, in simulation research as in clinical research, because other factors such as cost and time commitment also matter. For example, if two simulation modalities are equally effective, the cheaper one that requires less faculty time may be the better choice.
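To make the weighting idea concrete, here is a minimal sketch of how a weighted checklist score can differ from an unweighted one. The steps and criticality weights below are purely illustrative, not taken from the paper:

```python
# Hypothetical cricothyroidotomy checklist: (step, completed?, criticality weight).
# Steps and weights are illustrative only, not from Lineberry et al.
steps = [
    ("Identify landmarks",        True,  3),
    ("Prep and drape",            True,  1),
    ("Incisions in proper order", False, 5),  # high-criticality step missed
    ("Confirm tube placement",    True,  4),
]

def unweighted_score(steps):
    """Fraction of steps completed, treating every step as equal."""
    return sum(done for _, done, _ in steps) / len(steps)

def weighted_score(steps):
    """Fraction of total criticality weight earned."""
    total = sum(w for _, _, w in steps)
    earned = sum(w for _, done, w in steps if done)
    return earned / total

print(f"Unweighted: {unweighted_score(steps):.2f}")  # 3 of 4 steps -> 0.75
print(f"Weighted:   {weighted_score(steps):.2f}")    # missed the 5-point step -> 0.62
```

The same performance looks notably worse under weighting because the one missed step carries the most weight, which is exactly the signal an unweighted checklist hides.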

Conclusion

The authors argue that these areas must be addressed in order to produce simulation-based research that is meaningful. Primarily, the next generation of studies needs to address:

  1. Small sample size
  2. Reliability of rating
  3. Deeper understanding of the significance of Type 1 and Type 2 errors

There must be a rationale for comparing one modality of simulation with another. Finally, the modality chosen must also have clinical relevance and ultimately link education to patient outcomes.

1. Lineberry M, Walwanis M, Reni J. Comparative research on training simulators in emergency medicine: a methodological review. Simul Healthc. 2013;8(4):253-261. [PubMed]

Author information

Nikita Joshi, MD

ALiEM Chief People Officer and Associate Editor
Clinical Instructor
Department of Emergency Medicine
Stanford University
