Hospital Pediatrics
February 2015, Volume 5, Issue 2

Evaluating Simulation Education Via Electronic Surveys Immediately Following Live Critical Events: A Pilot Study

Corinne Savides Happel, MD,1 Meredith A. Lease, MD,2 Akira Nishisaki, MD, MSCE,3 and Matthew S. Braga, MD4

  1. National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, Maryland;
  2. American Family Children’s Hospital, University of Wisconsin, Madison, Wisconsin;
  3. Anesthesiology and Critical Care Medicine, The Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania; and
  4. Children’s Hospital at Dartmouth, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire

Address correspondence to Corinne Happel, MD, and Matthew S. Braga, MD, Children’s Hospital at Dartmouth, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756. E-mails: corinne.happel@gmail.com and matthew.s.braga@hitchcock.org

Abstract

Background and Objectives: Simulation-based medical education has become popular in postgraduate training for medical emergencies; however, the direct impact on learners’ clinical performances during live critical events is unknown. Our goal was to evaluate the perceived impact of simulation-based education on pediatric emergencies by auditing pediatric residents immediately after involvement in actual emergency clinical events.

Methods: Weekly team-based pediatric simulation training for inpatient emergencies was implemented in an academic tertiary care hospital. Immediately after actual pediatric emergency events, each resident involved was audited, via a Likert-scale survey, regarding roles, tasks performed, and the perceived effectiveness of earlier simulation-based education.

Results: From September 2010 through August 2011, a total of 49 simulation sessions were held. During the same period, 27 pediatric emergency events occurred: 3 code events, 14 rapid response team activations, and 10 emergency transfers to the PICU. Forty-seven survey responses from 20 pediatric residents were obtained after the emergency clinical events. Fifty-three percent of residents felt well prepared, and 45% reported having experienced a similar simulation before the clinical event. A preceding similar simulation experience was perceived as helpful in improving clinical performance. Residents’ confidence levels, however, did not differ significantly between those who reported having had a preceding similar simulation and those who had not (median of 4 vs median of 3; P = .16, Wilcoxon rank-sum test).

Conclusions: A novel electronic survey was successfully piloted to measure residents’ perceptions of simulation education compared with live critical events. Residents perceived that their experiences in earlier similar simulations positively affected their performances during emergencies.

  • graduate medical education
  • code blue
  • patient simulation
  • pediatrics

Pediatric in-hospital emergencies are not rare events1; nevertheless, they are not sufficiently frequent for resident trainees to gain and maintain skill competence. Simulation-based education enhances resident learning in critical skills.2 For this reason, pediatric residency programs increasingly use simulation-based education to prepare residents for critical events.3 Simulation training for pediatric trainees has been shown to improve technical and teamwork skills in a training environment.4–8 Few studies have evaluated the clinical impact of simulation-based education. One study correlated improved pediatric cardiopulmonary arrest survival rates with the introduction of simulation training in a hospital.9 However, the perceived impact of simulation-based education on learners’ actual clinical performances during emergencies has not been measured.

To address this question, we designed a pilot study to evaluate the perceived clinical impact of simulation-based education. We specifically attempted to link learners’ perceptions of training with their clinical experiences during pediatric emergencies.

METHODS

With local institutional review board approval, this pilot study was conducted from September 2010 through August 2011 at the Children’s Hospital at Dartmouth (CHaD), a pediatric tertiary teaching hospital located in Lebanon, New Hampshire. Consent from each learner was given through voluntary participation. CHaD is part of Dartmouth-Hitchcock Medical Center, a large academic center with 396 total beds; pediatric residents are responsible for the 20-bed pediatric inpatient unit, the 10-bed PICU, and the 30-bed neonatology unit. The hospital has a pediatric residency program accredited by the Accreditation Council for Graduate Medical Education; the program has 7 residents per year.

At CHaD, simulation-based education has been integrated into pediatric residency training over the last 5 years. A mock code program was designed that calls for repeated resident participation to minimize the decline of knowledge and skills gained from simulation-based education over time.10 The program was overseen by the senior author (M.S.B.), a critical care physician with 6 years of clinical and simulation training and debriefing experience. In this program, pediatric residents were scheduled to participate in weekly mock codes during general pediatric inpatient unit and PICU rotations throughout their 3 years of training; on average, residents spent 4 months per year in these rotations. Mock codes were held weekly, at the same known time, in a high-fidelity simulation laboratory. Each mock code consisted of a clinical scenario lasting ∼30 minutes. Pediatric residents and rotating anesthesiology residents participated. Sessions were videotaped and replayed for participants at the debriefer’s discretion.

The training scenarios were chosen by pediatric critical care faculty based on the team members’ earlier experiences with simulation as well as any recently identified gaps in clinical exposure. A total of 49 different scenarios were used, including respiratory arrest, asystole, ventricular fibrillation, supraventricular tachycardia, septic shock, status epilepticus, status asthmaticus, increased intracranial pressure, and toxin exposure. Debriefing (led by an experienced pediatric critical care clinician as a content expert and an experienced debriefer as a simulation education expert) was performed immediately after each scenario and lasted 15 to 30 minutes. The content of the debriefing covered both medical and crisis resource management topics; the debriefing methods followed either an advocacy-inquiry or a plus/delta format.11,12 Residents also participated in separate simulation sessions focused on technical skills, including tracheal intubation, central line placement, intravenous cannulation, and intraosseous needle placement.

At CHaD, the pediatric code team consists of 2 pediatric residents on the general ward service, 1 pediatric resident on the PICU service, an on-call anesthesiologist, an on-call respiratory therapist, and an on-call PICU nurse. The rapid response team comprises the same personnel minus the anesthesiologist. The on-call PICU attending and relevant consulting services are notified of critical events and respond as appropriate; there are no overnight in-house PICU attending physicians, and the program has no PICU fellows.

The research team implemented a notification system with a hospital-wide life support program. The research team was notified via an automated e-mail system within 24 hours after code blues, rapid response team activations, and urgent transfers from the regular pediatric unit to the PICU. Once notified, the research team reviewed code documentation and medical records and sent an electronic 12-question survey to all residents who participated in the event (Supplemental Information). The survey took ∼10 minutes to complete. Respondents were asked to describe the nature of the clinical event, their roles in the event, and their levels of perceived preparedness. Respondents were also asked how previous simulation compared with the event and whether they felt the simulation had improved their performance. Two free-form questions allowed residents to describe any aspect of the critical event that was different from what they had anticipated and to list any knowledge or skill they wished they had possessed before the clinical event. This questionnaire was developed in a manner similar to that used by Hunt et al.1 It was reviewed and refined by 5 graduating residents; the pediatric simulation group at CHaD then further revised the survey for content validity.
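The notification-to-survey workflow described above could be automated along the lines of the following minimal sketch. This code is purely illustrative: the study used an institutional e-mail and survey system, and every name in the example (EventNotification, send_survey_link, SURVEY_URL, and all addresses) is a hypothetical placeholder, not part of the study.

```python
# Hypothetical sketch of the notification-to-survey workflow; not the study's actual system.
from dataclasses import dataclass
from datetime import datetime
from email.message import EmailMessage
from typing import List
import smtplib

SURVEY_URL = "https://example.org/simulation-survey"  # placeholder link to the 12-question survey


@dataclass
class EventNotification:
    """One automated notice of a critical event (code blue, rapid response, or PICU transfer)."""
    event_type: str
    event_time: datetime
    resident_emails: List[str]  # residents who participated in the event


def send_survey_link(note: EventNotification, smtp_host: str = "localhost") -> None:
    """E-mail the ~10-minute electronic survey to every resident involved in the event."""
    with smtplib.SMTP(smtp_host) as smtp:
        for addr in note.resident_emails:
            msg = EmailMessage()
            msg["Subject"] = f"Post-event survey: {note.event_type} on {note.event_time:%Y-%m-%d}"
            msg["From"] = "simulation-research@example.org"
            msg["To"] = addr
            msg.set_content(
                f"You were involved in a {note.event_type}. "
                f"Please complete the brief survey: {SURVEY_URL}"
            )
            smtp.send_message(msg)
```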

No sample size calculation was performed because this research was planned as a 1-year pilot study. Our unit of analysis was the survey response, not the resident, because training status and skill sets change over time; residents were therefore asked to respond after each distinct critical event in which they were involved.

Items on the questionnaire regarding perception used a 5-point Likert scale (1, not at all; 2, little/few; 3, some; 4, well; and 5, very well). These data are summarized with medians and interquartile ranges (IQRs) because the sample size was small and the data were not normally distributed. To assess the impact of previous simulation training on residents’ perceptions of preparedness, the Wilcoxon rank-sum test was used because of the nonparametric nature of the responses.
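As an illustration of the analysis described above, the median/IQR summary and the Wilcoxon rank-sum comparison could be computed as in the sketch below. The Likert ratings in the example are invented placeholders, not the study data, and this is not the authors' analysis code.

```python
# Illustrative sketch of the analysis described above; the ratings are invented, not study data.
import numpy as np
from scipy import stats

# 5-point Likert preparedness ratings (1 = not at all ... 5 = very well) for two hypothetical groups
with_similar_sim = np.array([4, 4, 5, 3, 4, 4, 3, 5])   # reported a similar previous simulation
without_similar_sim = np.array([3, 3, 4, 2, 3, 4, 3])   # did not report a similar simulation


def summarize(ratings: np.ndarray) -> str:
    """Median and interquartile range, the summary used for small, non-normal ordinal data."""
    q1, med, q3 = np.percentile(ratings, [25, 50, 75])
    return f"median {med:.0f} (IQR: {q1:.0f}-{q3:.0f})"


print("Similar simulation:   ", summarize(with_similar_sim))
print("No similar simulation:", summarize(without_similar_sim))

# Wilcoxon rank-sum (Mann-Whitney U) test comparing the two independent groups,
# appropriate for small samples of ordinal, non-normally distributed responses.
statistic, p_value = stats.mannwhitneyu(with_similar_sim, without_similar_sim,
                                        alternative="two-sided")
print(f"Mann-Whitney U = {statistic:.1f}, P = {p_value:.2f}")
```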

RESULTS

Twenty-seven clinical events were identified: 3 (11%) code blues, 14 (52%) rapid response team activations, and 10 (37%) emergency transfers to the PICU. The median patient age was 8 years (IQR: 2–15 years). Respiratory failure was the most common cause of the event (44%), followed by cardiac difficulty (33%). The median number of residents responding per event was 3 (IQR: 2–3).

A total of 74 surveys were sent, with a 64% response rate (47 surveys were completed [postgraduate year 1: 11; postgraduate year 2: 18; and postgraduate year 3: 18]). Twenty-one (45%) of 47 surveys indicated that residents perceived that they had participated in a previous simulation session that was similar to the critical event.

Residents self-identified the roles they had filled in the critical events. Fourteen (30%) of the 47 responses indicated that the resident served as team leader. Residents often reported talking with patient family members (49%, n = 23), calling consultants (38%, n = 18), and reviewing patient medical records (38%, n = 18). A small number of survey responses indicated that the corresponding resident performed a procedure during the described event: 5 (11%) reported performing chest compressions, 5 (11%) reported performing airway repositioning, and 1 (2%) reported performing tracheal intubation. The majority (74%, n = 35) of surveys indicated that the responding resident had not performed any procedure during the event.

In terms of preparedness, most residents indicated that they felt well prepared for the critical event (median: 4 [well prepared]; IQR: 3–4) (Fig 1). Residents who perceived that they had previously participated in a similar simulation reported that the scenarios, simulation environment, and mannequins were realistic compared with the actual clinical events (Table 1). These residents reported that the earlier simulation experience positively affected their behaviors during the clinical event (Table 2). Residents’ confidence levels, however, did not differ significantly between those who reported having had an earlier similar simulation and those who had not (median of 4 [IQR: 3–4] vs median of 3 [IQR: 3–4]; P = .16, Wilcoxon rank-sum test).

FIGURE 1

Sense of preparedness among residents who had experienced a similar earlier simulation versus those who had not. Forty-seven surveys were completed by 20 residents, describing a total of 23 critical events. Twenty-one of the 47 surveys indicated that the participating resident had completed a similar simulation.

TABLE 1

Responses to the Following Questions Comparing Previous Simulation Experience With the Current Critical Event

TABLE 2

Self-Evaluation of How Previously Completed Simulation Experiences Affected Performance During the Current Critical Event

DISCUSSION

Simulation-based education is becoming popular in postgraduate medical training. With the Accreditation Council for Graduate Medical Education’s transition toward competency-based education, simulation will likely be formalized as a training and evaluation modality. The goal of the present pilot study was to evaluate the perceived impact of earlier similar simulations on learners’ preparedness. Capturing residents’ perceptions of simulation directly after a critical event advances on previous studies, which evaluated resident perceptions of simulation in isolation from clinical encounters. Live critical events seem to be opportune moments for identifying perceived gaps in previous educational experiences.

Participating residents reported that preceding similar simulations positively affected their performances during clinical events, specifically including knowing when to call for help, understanding medical management, and feeling confident enough to contribute knowledge (Table 2). The majority of residents reported confidence in the management of critical events regardless of whether they reported having experienced an earlier similar simulation. There are several possible explanations for this finding. First, the nonsignificant difference in confidence levels between groups may be due to the small sample size inherent to this pilot study. Second, residents likely would have participated in ≥1 simulation even if they did not perceive that any were similar to the critical event, and during any simulation they would have been taught key behavioral skills (including teamwork, role clarity, and communication skills) that could improve their preparedness. Finally, we acknowledge that self-confidence levels do not imply competence.13 Novice learners often overestimate their skill levels, and residents who had participated in a similar simulation may have been more critical of their own preparedness.

From the perspective of pediatric residency training programs, the evaluation of simulation-based education should move from measuring trainee perceptions toward measuring behaviors in clinical encounters, as described in the Kirkpatrick evaluation model.14 From the perspectives of hospitals and patients, evaluating simulation-based education should move from measuring behavioral improvements in laboratory/simulation settings to measuring improved patient care practices and patient outcomes.15 In the present study, we explored the possibility of evaluating the perceived impact of simulation-based education on pediatric emergencies by auditing pediatric residents immediately after involvement in actual critical events. Although this level of analysis ranks low in the Kirkpatrick evaluation model (level 1), our assessment was paired with discrete clinical events, an approach that, to our knowledge, has not previously been reported. In the future, direct observation and video recording can be used in simulations and critical events to measure the impact of simulation education more objectively. Direct observation and video recording are becoming common in trauma and neonatal resuscitation,16–20 although to our knowledge no one has paired observations of residents in simulations with those of residents in live events. Such a comparison may link specific simulation components with the improved patient outcomes9,21 that have previously been correlated with the initiation of simulation programs.

Finally, it is noteworthy that residents performed relatively few procedures during critical events. Although it was outside the scope of this project to assess details of procedural experience, this finding supports a growing concern regarding the emergency procedural competency of graduating pediatric residents.22

The present study has several limitations inherent in a survey design. This research was a pilot study with a limited time frame, and any statistical inference is therefore likely underpowered. It is also important to emphasize that the measured effectiveness of simulation training relied on self-perception, which falls at the lowest level in the Kirkpatrick evaluation model.14 Our assessment of simulation training effectiveness was performed after clinical events, which could have introduced recall bias and might have positively or negatively affected the perceived simulation training effectiveness. We also had a lower than ideal response rate to our questionnaire. This outcome might have resulted in positively skewed results (ie, more engaged resident trainees might have responded selectively). Finally, as already discussed, it is a well-documented phenomenon that trainees report a false sense of confidence when skill levels are low (ie, the Dunning-Kruger effect).23

CONCLUSIONS

The novel technique of auditing residents after live critical events regarding perceptions of previous simulations was successfully piloted. Residents reported that their experiences in preceding similar simulations positively affected their performances during actual clinical events.

Acknowledgments

The authors wish to thank Marcy Singleton, APRN, Derek Callaway, PA, Heather Robertson, MD, and Ashley Sens, MD, for their leadership in simulation training sessions.

Footnotes

  • FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.

  • FUNDING: This study was funded by Helen’s Fund at the Geisel School of Medicine. Corinne Happel was at Dartmouth-Hitchcock Medical Center during the time of this study.

  • POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.

ABBREVIATIONS

CHaD: Children’s Hospital at Dartmouth
IQR: interquartile range
