Objective: The goal of this study was to measure the impact of the new 2011 Accreditation Council for Graduate Medical Education duty hour standards (DHS) on education, patient care, and overall satisfaction as perceived by pediatric hospitalist faculty.
Methods: We undertook a nonrandomized but controlled study of 23 pediatric hospitalist faculty members during a trial of the new DHS in 2011. During the intervention (January), residents piloted schedules that complied with the new DHS; during the control period (February), resident schedules complied with the previous 2003 DHS. Daily surveys solicited faculty perceptions of the amount and quality of teaching provided, time with patients, quality of patient care, and overall faculty satisfaction. Faculty were also surveyed on their years of experience as hospitalists and their clinical teaching activity. Multivariable logistic regression with generalized estimating equations was used to examine outcome associations after adjusting for census and accounting for multiple reports per attending.
Results: Census volumes were higher in the control group. During the intervention, faculty were less likely to rate their quality of teaching (odds ratio [OR]: 0.40 [95% confidence interval (CI): 0.18–0.88]) and overall satisfaction with the shift (OR: 0.23 [95% CI: 0.08–0.66]) as good/excellent compared with controls. During the intervention, more years of experience as a hospitalist were associated with rating quality of patient care provided as good/excellent (OR: 1.77 [95% CI: 1.23–2.54]).
Conclusions: Faculty were less likely to rate their quality of teaching and overall satisfaction as good/excellent during a trial of the 2011 DHS. In addition, more experienced faculty were more likely to rate the quality of care highly.
In 2003, the Accreditation Council for Graduate Medical Education (ACGME) mandated the institution of duty hour standards (DHS) at residency programs across the United States.1 These initial standards were put into place in response to an increasingly complex health care system that required residents to care for sicker patients, growing public opinion that long work hours compromised patient safety and resident well-being, and increasing evidence of the negative effects sleep deprivation had on resident performance.2 After the implementation of the 2003 DHS, various studies were conducted to examine the effects of a decrease in work hours on residents’ educational experience and quality of care. A survey of key clinical faculty from 39 internal medicine residency programs found that the 2003 DHS resulted in a worsening of the physician–patient relationship, resident education, and opportunities for teaching.3 Furthermore, this study noted that faculty members with ≥5 years of teaching experience were more likely to perceive a negative effect of DHS on resident education. Additional studies of surgical and internal medicine faculty revealed that institution of the 2003 DHS had negative effects on their ability to teach, resident education, and amount of teaching on rounds.4,5
As with the 2003 DHS, the new ACGME DHS that went into effect in July 2011 have the potential to affect education and patient care.1 Studies published on the feasibility of implementation of the 2008 Institute of Medicine recommendations6 or the impact of the 2011 DHS note potential negative effects on resident education and patient care.7–12 Given the large percentage of clinical education that pediatric hospitalists provide for residents, they are in a prime position to monitor the effects of and develop a response to the new DHS.13
The aim of the current study was to determine the impact of the new 2011 ACGME DHS on education, patient care, and overall faculty satisfaction as perceived by pediatric hospitalist faculty during an early pilot of the DHS. A secondary aim was to determine if individual faculty characteristics, including experience and clinical teaching time (ie, time spent supervising residents in the clinical setting), affected the perception of education, patient care, and satisfaction.
Methods

This study was a nonrandomized but controlled prospective study of 23 pediatric hospitalist faculty members. The study was approved as exempt by the institutional review board of Cincinnati Children’s Hospital Medical Center (CCHMC).
Study Setting and Subjects
CCHMC is a large, freestanding pediatric teaching hospital with >180 residents and 4 general inpatient pediatric teams. Twenty-three faculty previously scheduled to attend on 1 of these inpatient teams during January and February 2011, including members of the Division of Hospital Medicine and chief residents, were invited to participate. Faculty were scheduled for daily shifts from 7 am to 4 pm, during which they conducted rounds, staffed admissions, and provided teaching. They then handed off to 1 evening attending who remained onsite from 4 pm to midnight to staff new admissions and provide support and teaching to the residents.
Study Design and Intervention
The CCHMC pediatric residency program piloted the 2011 ACGME DHS in January 2011 on 4 general inpatient pediatric teams to determine how to feasibly structure shifts and educational activities. This study was conducted to assess faculty perceptions during the January pilot and compare these perceptions with those of faculty attending during the February control period when resident schedules reverted to a traditional schedule that complied with the 2003 DHS. A representation of the resident and faculty schedules during the 2-month period is depicted in Fig 1. January was selected for the pilot due to adequate resident staffing on the study teams to allow for compliance with the new DHS. February was selected as the control given its proximity to the intervention period and anticipation that census levels and patient acuity would be similar. Furthermore, given changes that occur in knowledge acquisition and experience over a year’s time, the proximity of the 2 months helped to minimize any differences in the residents.
Survey Creation and Collection
We designed the daily faculty tracking and faculty characteristic surveys specifically for this study. Both were reviewed by experts in education and research from the Division of Hospital Medicine for content and psychometric validity. The daily tracking surveys inquired about quality and amount of teaching, quality and amount of time spent with patients, quality of patient care provided by the resident team, and the attending’s overall satisfaction with the shift. Answer choices were based on a 5-point Likert scale (1 = very poor, 2 = poor, 3 = adequate, 4 = good, and 5 = excellent). Faculty also recorded the length of rounds and daily census numbers. Faculty completed a paper copy of the daily tracking survey each day on service. The faculty characteristic survey was distributed after completion of the study via an online platform14 and inquired about a faculty member’s years of experience and clinical teaching time. Faculty members were asked to record their names on the daily tracking surveys so that responses could be clustered according to individual and linked to the faculty characteristic survey results. However, the authors were blinded to individual responses during the analysis.
Data Analysis

Likert scale outcome ratings from the daily faculty surveys were dichotomized into scores of 4 or 5 (good/excellent) versus scores of 1, 2, or 3. Multivariable logistic regression with generalized estimating equations (GEEs), accounting for multiple reports from each faculty member and adjusting for daily patient census, was used to compare faculty perception ratings between the intervention and control periods. Similarly, adjusted logistic regression GEE models were used to assess associations between faculty perceptions and faculty characteristics, including amount of clinical teaching time (<0.5 full-time equivalent [FTE] vs ≥0.5 FTE) and years in practice.
Results

A total of 23 faculty members were included in the analysis: 16 served as attending physicians during the intervention period, 17 during the control period, and 10 completed service time in both. Census volumes were higher in the control period, with an average daily census of 15 patients (SD ± 4) compared with 8 patients (SD ± 3.5) during the intervention (Fig 2).
Daily Tracking Surveys
The response rate for the daily tracking surveys was 100%, with the 4 on-service attending faculty members completing surveys each day. There were 31 days of data collection during the intervention period and 28 days during the control period. After adjusting for census, faculty were less likely to rate their quality of teaching (odds ratio [OR]: 0.40 [95% confidence interval (CI): 0.18–0.88]) and overall satisfaction with the shift (OR: 0.23 [95% CI: 0.08–0.66]) as good/excellent during the intervention period compared with the control period (Table 1). No statistically significant differences were noted between the control and intervention groups in faculty ratings of the amount of teaching, time with patients, or quality of patient care provided by the team. Because census varied substantially between the 2 groups, a sensitivity analysis was conducted on a sample restricted to daily census counts between 9 and 16, a range with sufficient overlap between the groups; results were similar.
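The census-restricted sensitivity analysis amounts to refitting the primary model on the subset of days whose census falls in the overlapping range. A minimal sketch of that restriction step, using simulated data and hypothetical column names:

```python
# Hedged sketch of the census-restricted sensitivity analysis: keep only days
# with census in the overlapping range (9-16); the primary model would then be
# refit on this subset. All data simulated, column names illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "period": ["intervention"] * 31 + ["control"] * 28,
    "census": np.concatenate([
        rng.integers(4, 14, 31),   # pilot days: lower volumes
        rng.integers(10, 20, 28),  # control days: higher volumes
    ]),
})

# Restrict to days with sufficient census overlap between the 2 groups
overlap = df[df["census"].between(9, 16)]
print(overlap.groupby("period")["census"].agg(["count", "mean"]))
```

Restricting to the overlap region avoids extrapolating the census adjustment across ranges observed in only one group, which is the concern the authors raise in the Limitations.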
Faculty Characteristic Surveys
Faculty characteristic surveys were completed by 20 (87%) faculty members; results are presented in Table 2. Fifty percent of participating faculty had completed their residencies within 3 years of the study (2008–2010). During the intervention period, faculty members with more years of experience were more likely to rate the quality of care provided by their team as good/excellent (OR: 1.77 [95% CI: 1.23–2.54]). There were no other statistically significant associations between faculty characteristics (years of experience, clinical teaching time) and responses regarding quality and amount of teaching, quality of and time for patient care, or overall satisfaction during the intervention.
Discussion

As with earlier DHS, the 2011 DHS evoke concerns regarding their impact on education and patient care. Daily survey data obtained during this study revealed that pediatric hospitalist faculty were more likely to rate their quality of teaching and overall satisfaction lower during the pilot intervention of the 2011 DHS. However, there was no significant difference between the intervention and control groups in ratings of the amount of teaching, time with patients, or quality of patient care provided by the team. The lower rating of teaching quality may reflect the compression of time with learners, both during rounds and after, which no longer allowed faculty to teach via their traditional methods (eg, didactics, case-based discussion, bedside teaching). However, faculty noted no difference in the time they had for education. The dissonance between these 2 related measures may reflect not the 2011 DHS and the time allotted for teaching but rather the manner in which shifts were structured during the intervention, which allowed less face time with learners on rounds (ie, instead of 2 seniors and 4 interns on rounds, only 1 senior and 2–3 interns were present).
The lower overall satisfaction ratings in the intervention group could reflect less face time with learners, reduced resident knowledge of patients, or fewer learners during morning rounds (a critical time for patient care and teaching). The lower satisfaction could also reflect the novelty of the new shift schedule, an effect that may lessen with time. Had the pilot intervention lasted longer, faculty and residents might have had additional time to refine their workflows and teaching practices, leading to improved satisfaction. For the other 3 survey items, the limited differences between the groups may indicate a trivial impact of the 2011 DHS on education and patient care. However, as previously mentioned, this was a time-limited study, and long-term effects must be evaluated in future studies.
Data from this pilot intervention revealed that, at our institution, a large percentage of hospitalist faculty are only a few years out from their own residency training (n = 10 [50%]). These individuals trained under the 2003 DHS, and we presumed they would easily acclimate to their roles within the new DHS. However, our results revealed that faculty with more experience were more likely to rate the care provided by their team highly. This finding is contrary to previous literature published after the 2003 DHS, which found that more experienced faculty were less likely to view the changes favorably.3 Our findings suggest that one’s perception of changes such as this program is linked to experience, which perhaps allows for greater adaptability.
These results should be considered in light of a few limitations. First, we examined 1 version of a 2011 DHS-compliant schedule at a single site with a limited number of faculty. Our results therefore may not be generalizable to other programs. The study took place over a limited period of time, thus measuring only the initial impact and not long-term effects. Lastly, the census numbers between the intervention and control groups differed substantially, which meant the census-adjusted models were extrapolations of a small sample of data in which census numbers overlapped between the 2 groups. Reassuringly, our sensitivity analysis, restricted to overlapping census numbers, showed results similar to the primary analysis.
Conclusions

Pediatric hospitalists provide a large proportion of the inpatient education for pediatric residents and are at a vantage point to observe the effects of the new 2011 DHS. During this pilot intervention at a large pediatric teaching hospital, faculty were less likely to rate their quality of teaching and overall satisfaction as good/excellent under a 2011 DHS model compared with control faculty working under a traditional 2003 DHS model. Less experienced faculty were also less likely than more experienced colleagues to rate the quality of patient care provided by their teams during the pilot as good/excellent, signifying that perception of changes in DHS is linked to experience. Otherwise, differences between the groups were not statistically significant, suggesting an otherwise limited impact of the 2011 DHS on education and patient care. These findings can be used by hospitalist groups to develop junior hospitalist faculty and should prompt additional studies evaluating the long-term impact of the 2011 DHS.
This abstract was presented at the Pediatric Academic Societies Meeting; May 2012; Boston, MA; and at the Pediatric Hospital Medicine Conference; July 2012; Cincinnati, OH.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
- ACGME — Accreditation Council for Graduate Medical Education
- CCHMC — Cincinnati Children’s Hospital Medical Center
- CI — confidence interval
- DHS — duty hour standards
- FTE — full-time equivalent
- OR — odds ratio
- Copyright © 2013 by the American Academy of Pediatrics