OBJECTIVES: Evaluative assessment is needed to inform improvement of Part 4 Maintenance of Certification (MOC), a large-scale program that aims to improve physician knowledge, engagement, and skills in quality improvement (QI). We sought to determine if Part 4 MOC participation improves perceived educational and clinical outcomes by piloting a new physician survey.
METHODS: We administered a new online survey (MOC Practice, Engagement, Attitude, and Knowledge Survey) to physicians at the beginning and end of a Part 4 MOC project sponsored by a pediatric hospital’s American Board of Medical Specialties’ portfolio program during 2015. Participants worked in academic and community settings and in various accredited specialties. The main outcome was change in survey response on a 5-point Likert scale (1 = best) for 3 learning domains (QI engagement and attitude, QI method application, and improved patient care).
RESULTS: Of 123 complete responses (97% response rate), mean baseline responses were positive or neutral across the 3 domains (2.2, 2.3, and 1.9, respectively). Responses improved in QI engagement and attitude (−0.15, z score = −2.78, P = .005), QI method application (−0.39, z score = −7.364, P < .005), and improved patient care (−0.11, z score = −1.728, P = .084).
CONCLUSIONS: A Part 4 MOC physician survey provides valuable data to evaluate and improve the learning activity. In this children’s hospital program, physicians view Part 4 favorably. Participation was associated with modest improvements in perceptions of QI engagement and attitude, application of QI methods, and patient care. Systematic evaluation of all Part 4 MOC projects and programs has the potential to improve the program nationally.
Compelled by poor patient outcomes and unsustainable health care costs, US physicians are facing an array of initiatives aimed at delivering better care.1 One of the most sweeping and contentious of these is the American Board of Medical Specialties (ABMS) Maintenance of Certification (MOC) process.2
The ABMS MOC requirements are intended to assist the 24 member boards of the ABMS in certifying to the public that physicians have the requisite knowledge, competence, and skills expected of a certified specialist. Among these standards, the Part 4 MOC requires physicians to participate in practice improvement efforts that demonstrate their ability to assess their own clinical practice and then, ultimately, implement appropriate quality improvement (QI) activities to better patient outcomes. This process often involves explicit practice measurement activity and the comparison of performance with peers and to national benchmarks.
Although many physicians embrace the concept of MOC, the Part 4 MOC exercise still draws strong criticism.3–6 Nearly every physician in the country will need to spend considerable time and resources participating in these QI activities to maintain board certification through the MOC process.7 Critics commonly assert that many of the projects are poor learning experiences, are not relevant to their practices, do not improve outcomes for their patients, and are too expensive, time consuming, and administratively focused.8,9 To improve the Part 4 experience and better meet the needs of the physicians and patients, systematic evaluation by using a validated survey instrument to guide improvement efforts is needed.
Therefore, we systematically designed a pre- and post-participation Part 4 MOC survey to assess perceived educational and clinical outcomes and then piloted the survey in a large ABMS Multi-Specialty Portfolio Approval Program. We primarily aimed to measure changes in physicians’ perception of (1) engagement and attitudes about QI and MOC, (2) their ability to apply QI methods, and (3) the impact of their Part 4 MOC activity on patient care. Because there may be an opportunity to tailor MOC experiences to the learner, we secondarily sought to determine if physician characteristics are associated with learning.
The Seattle Children’s Hospital (SCH) MOC Program began in 2012 and enrolls ∼100 to 160 community and academic physicians per year from the Pacific Northwest in Part 4 MOC projects. During the study period, the SCH Program managed 13 ABMS-accredited projects with the aim of improving care for patients in multiple settings, including ambulatory clinics, academic and community hospitals, private practices, communities, and newborn nurseries. Participation was limited to projects that are relevant to the physician’s practice setting.
We surveyed all physicians who participated in the SCH MOC Program in 2015. Participants were e-mailed a link to an anonymous baseline survey administered via an online data capture system (REDCap) on project enrollment.10 Upon completion of the MOC project and as part of the attestation process, participants were asked to complete a similar anonymous follow-up survey. E-mail reminders were sent to nonresponders to improve the response rate. All but 5 participants completed the survey within a week of attestation. Participants enrolled in multiple projects during the study period were included. Project leaders were excluded because their learning experience was expected to differ from that of participants. Participants were excluded if they enrolled but did not complete both the pre- and the post-project survey or did not complete the MOC project.
We conducted an extensive review of the literature and did not find any instruments to measure the educational impact of Part 4 MOC. Therefore, MOC and educational assessment experts (J.S.T., J.D.C., J.B., and M.A.B.) in a stepwise fashion developed the survey, called the “MOC Practice, Engagement, Attitude, and Knowledge Survey” (MOC-PEAKS). First, the initial questions were developed from review of the literature, participant feedback, and programmatic goals.11 Next, themes for perceived benefits, limitations, and barriers were identified by using structured interviews of physician participants from previous projects preceding the current study. Then, we developed a pilot Web-based survey using the identified themes. Content experts reviewed the survey items for clarity, relevance, and cognitive difficulty. Modifications to the survey were then made, and the survey was fully launched in January of 2015 for all of the participants in the SCH MOC Program. The survey uses a Likert scale (1 = strongly agree, 2 = agree, 3 = neutral, 4 = disagree, and 5 = strongly disagree) (Table 1). Two items from the 15-item survey were excluded from the analysis because the questions were not relevant to this research.
During initial survey development, the reliability of the survey was evaluated by using Cronbach’s α, and changes to reliability with each item deleted were analyzed. Potential domains of related items were further explored by principal components factor analysis with varimax rotation. Physicians’ perceptions were organized into 3 educational domains: (1) engagement and attitude about QI and MOC; (2) application of QI methods; and (3) perceived impact of MOC activity on patient care.
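The reliability and dimensionality checks described above can be sketched in Python. Everything here is illustrative: the synthetic Likert responses, the sample size, and the single-latent-trait structure are assumptions for demonstration, not the study’s data, and the varimax rotation step is omitted (it is available in dedicated factor analysis packages).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: np.ndarray) -> list:
    """Alpha recomputed with each item dropped in turn (item-deletion check)."""
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]

rng = np.random.default_rng(0)
# Hypothetical 13-item, 5-point Likert data driven by one latent trait plus noise.
latent = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 13))), 1, 5)

alpha = cronbach_alpha(items)
# Principal components on the item correlation matrix: each eigenvalue's share
# of the total gives the proportion of scale variance that component explains.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
explained = eigvals / eigvals.sum()
```

In practice one would retain components by an eigenvalue or scree criterion and then rotate the loadings (eg, varimax) before interpreting factors, as the study did.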
To be enrolled in the study, each participant must have met the minimum participation criteria, which include active participation in at least 3 team meetings in which project aims and interventions were evaluated and barriers were identified and mitigated by using analysis of data over time. All participants must have completed QI training, either locally or through distance learning (eg, through the Institute for Healthcare Improvement Open School). For some projects, participants must have met additional project-specific activity criteria (eg, chart review, self-reflection exercises, or peer observations).
Responses to the survey completed before or on beginning MOC project participation (pre-participation survey) were compared with responses to the survey completed after participation (post-participation survey). Responses were averaged to calculate a mean domain score for each of the 3 domains of interest, as well as average differences in each domain. If a participant completed more than 1 project during the study period, the first pre- and post-MOC surveys were compared, because inclusion of second surveys in a subanalysis did not alter the results. Seven participants left at least 1 survey question blank, so domain averages and average differences for those surveys were calculated from the remaining questions. Questions and domains were compared between pre- and post-participation surveys by using Wilcoxon rank tests. Diverging stacked bar charts were used to distinguish the proportion of respondents who neither agreed nor disagreed from those who had a positive or negative response. Comparisons by previous QI training and by residency completion year category (≤1996, 1997–2005, 2006–2009, 2010–2015) were made by using Mann–Whitney U tests and Kruskal-Wallis tests, respectively.
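The nonparametric comparisons above (the study used SPSS) can be sketched with SciPy on hypothetical data; the sample sizes, score distributions, and group splits below are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120
# Hypothetical paired domain scores on a 5-point scale
# (1 = strongly agree ... 5 = strongly disagree); post-participation scores
# are shifted slightly toward agreement, echoing the modest observed change.
pre = rng.integers(1, 6, size=n).astype(float)
post = np.clip(pre - rng.choice([0, 0, 1], size=n), 1, 5)

# Paired pre/post comparison of a domain score: Wilcoxon signed-rank test.
w_stat, w_p = stats.wilcoxon(pre, post)

# Two independent groups (eg, prior QI training yes/no): Mann-Whitney U test.
trained = rng.integers(1, 6, size=60)
untrained = rng.integers(1, 6, size=60)
u_stat, u_p = stats.mannwhitneyu(trained, untrained)

# More than 2 groups (eg, residency completion year categories): Kruskal-Wallis.
g1, g2, g3, g4 = (rng.integers(1, 6, size=30) for _ in range(4))
h_stat, h_p = stats.kruskal(g1, g2, g3, g4)
```

These rank-based tests are the conventional choice for ordinal Likert responses, where means and t tests can be misleading because the scale intervals are not necessarily equal.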
The Institutional Review Board of SCH approved the project. All analyses were conducted by using SPSS version 18 (PASW Statistics for Windows; SPSS Inc, Chicago, IL).
Among individuals who completed the project, 123 responded to both surveys (97% response rate); these respondents made up the study population. Fifteen participants enrolled in multiple projects (13 in 2 projects and 2 in 3 projects) during the study period. Inclusion of participants who did not complete the project or all surveys (n = 33) in a subanalysis did not change the results.
Participants completed residency between 1974 and 2014, and the mean year of completion was 2002. The majority of participants worked in hospital-based practice (96.1%), and 30% had previously participated in MOC projects in the SCH MOC Program. Additionally, 55.5% reported previous participation in some other QI training or education. Most participants were diplomates of the American Board of Pediatrics (111), followed by the American Board of Psychiatry and Neurology (11), the American Board of Emergency Medicine (1), the American Board of Surgery (1), the American Board of Internal Medicine (1), the American Board of Allergy and Immunology (1), and the American Board of Preventive Medicine (1).
Survey Reliability and Factor Analysis
The survey scale demonstrated excellent reliability, and the factors were reasonable indicators of the content characteristics. The overall reliability of the 13-item scale was a Cronbach’s α of 0.88, and the characteristics of the scale were not improved by deletion of any individual item. Principal components factor analyses identified 2 factors accounting for a total of 64% of the variability of the scale. The first factor included all of the engagement and impact items and the first 3 application items. The second factor included 5 items indicating the perceived ability of the participant to describe, use, interpret, create, and apply specific QI methods.
Changes in Items and Domains After MOC Participation
The mean pre-participation responses were mostly positive or neutral, ranging from 1.8 to 3.1 on the 5-point Likert scale (Table 1). Figure 1 displays the Likert scale changes after MOC participation in each of the educational domains, and all 3 demonstrate an improvement. Comparing the pre-participation and first post-participation survey responses across all domains, participants reported an improvement in post-participation responses, and the analyses indicate statistically significant differences in the engagement and application domains (z score = −2.780, P = .005; z score = −7.364, P < .005), but not in the impact domain (z score = −1.728, P = .084), and in 9 of the 13 individual items that make up the domains (Table 1). Comparing domains, the application of QI methods showed the largest mean change, with an average difference of 0.39 on the Likert scale (z score = −7.225, P < .005). Comparing individual items, questions regarding the ability to interpret a run chart and/or statistical process control chart and the ability to create a run chart and/or statistical process control chart had the highest mean change, with an average difference of 0.73 and 0.72 on the Likert scale, respectively (z score = −6.714, P < .005; z score = −6.785, P < .005). There was a significant change in the question within the patient care domain assessing if the participant felt that the MOC project would lead to lasting improvement in the care of his or her own patients (z score = −2.407, P = .016).
Comparing residency year completion quartiles (≤1996, 1997–2005, 2006–2009, and 2010–2015) with post-participation survey responses, no differences were found between residency year categories and domain means (Table 1). Individual question comparisons revealed statistically significant differences only in responses to the questions asking whether participants planned to apply QI methods in their jobs and whether participants were able to apply QI methods (χ2 = 7.929, P = .047; χ2 = 9.352, P = .025). In post hoc analyses adjusting for multiple comparisons, individuals completing residency in 1997 to 2005 exhibited a greater post-participation shift toward agree responses than the 2010–2015 group (P < .05).
Participants who reported no previous QI training demonstrated a statistically significant change toward agree responses in the application domain compared with those who reported previous participation in QI education (z score = −3.397, P = .001) (Table 2). Differences in the engagement and impact domains were not statistically significant (P = .701 and P = .993, respectively), and there were no significant differences among any individual questions in those domains (P > .05).
The survey responses in all 3 domains did not differ significantly for participants that worked primarily in a hospital versus an outpatient practice setting (P > .05).
We systematically developed and applied a survey to examine the value of physician participation in the Part 4 MOC by considering its impact in 3 learning domains. Contrary to data and publicity suggesting widespread concerns about Part 4 MOC, most physicians surveyed in this program reported positive or neutral baseline responses regarding the perceived value of MOC.3–6 Moreover, with our results, we demonstrate that the Part 4 MOC intervention is meeting some of its major educational goals: physicians reported significant improvement in their engagement and attitude about QI and MOC, their perceived ability to apply QI methods after participating, and the perception that their MOC activity would lead to lasting improvement in patient care. Thus, the Part 4 MOC exercise does, at least under certain circumstances, accomplish the goal of engaging large groups of physicians in meaningful practice-based QI participation and learning.
As with any large-scale educational and QI intervention, objective data are needed to critically evaluate and improve the Part 4 MOC. Recently, in a focus group of internal and family medicine physicians at the Mayo Clinic, Cook et al3 found misalignments between the purpose and the current state of MOC. To address this misalignment, the authors recommend a model that “invites increased support from organizations, effectiveness, and relevance of learning activities, value to physicians, integration with clinical practice, and coherence across MOC tasks.” With these goals in mind, we developed and implemented the MOC-PEAKS. Findings from our initial survey period provided valuable data that helped inform program development and align varied interests. We used those constructive data to improve the design and implementation of projects for diverse practices affiliated with a children’s hospital across the Pacific Northwest. We will continue to use this survey to monitor the learning effectiveness of projects and our program, because the learning needs of physicians will likely change over time. Similarly, the value of MOC nationally could be improved, and the widely publicized concerns could be addressed, if all Part 4 MOC programs systematically evaluated their learning effectiveness and used those data to identify and mitigate barriers to physician learning. A standardized and widely used survey and collaborative learning framework could drive program improvement through peer benchmarking and shared learning, particularly if supported by the ABMS.
Although there are many reports of successful Part 4 MOC projects, there are few data on the effectiveness of large-scale Part 4 MOC programs.12,13 Our results from a pediatric setting are similar to those of other studies that support the value of Part 4 MOC programs in adult care settings. In a survey of ∼30 000 family medicine physicians, Peterson et al14 found that 78% of participants in Performance in Practice Modules reported that they would change patient care, 90% would continue QI activities, and 90% found the program relevant to their practice. A study examining the effectiveness of the American Board of Ophthalmology’s Part 4 program reported that participants made 80% improvements in process measures and 38% improvements in outcome measures, and 74% of participants rated the program as good or excellent.15 Although improved patient outcomes are among the goals of the Part 4 MOC, and we demonstrated some improvement in physician perceptions of this domain, we did not identify a strong link. One explanation is that pre-participation survey responses were mostly agreeable and a ceiling effect prevented a discernible change in some survey questions. Moreover, our mixed results may be expected because QI activities have varied contextual factors, and their immediate effects on patient outcomes are difficult to perceive or measure.
The highest level of evaluation for medical education projects, including MOC activities, is patient-centered outcomes. Because of the challenges and limitations of outcomes-based research in medical education, few studies examining the impact of Part 4 MOC programs have reported improved clinical outcomes in care.3,12,16,17 Although learner-centered outcomes (satisfaction and learning) rank lower on Kirkpatrick’s 4-level model, they still hold value, as proximal and iterative goals toward better patient outcomes, in assessing the effectiveness of a Part 4 MOC program.18 It remains to be seen if our efforts, or those of the Part 4 MOC, will ultimately lead to large-scale improvements in health outcomes; however, it is unlikely that improvement will happen without first achieving physician engagement in MOC and QI, along with the ability to apply QI methods. Systematic survey analysis of Part 4 programs could evaluate a project’s or program’s progression toward meeting these learning milestones.
A number of physician surveys report dissatisfaction with MOC, often citing concerns about relevance and cost.3,19–21 This discontent has led to a movement among many state legislatures to ban the use of MOC as a criterion for hospital credentialing, which threatens the ability of physicians, hospitals, and governing bodies to demonstrate a commitment to up-to-date, quality care for patients and families. In response, the ABMS and many of its boards are actively trying to improve the Part 4 MOC with more flexibility in project design and management. In this study of a large ABMS Portfolio Program, we demonstrate that such flexibility can provide value. On the basis of feedback from physician focus groups, the American Board of Pediatrics has launched many new pathways designed to simplify the process and offer more flexibility and relevance to pediatricians in smaller practices.22
Comparative data reflecting the participant experience across projects and programs are needed, particularly as the Part 4 MOC evolves to meet the ever-changing needs of physicians as both professionals and life-long learners in a dynamic health care environment. Without these data, the boards, participating physicians, and the public are unlikely to know if, and under what circumstances, these changes are leading to desired improvements. The ABMS must find ways to motivate physicians to engage in the responsibility of (1) demonstrating professional competence, (2) continually learning to improve the quality of their patient care, and (3) becoming facile with the methods to do so in their own practice. For some physicians, their employer or affiliated organization sponsors and facilitates these activities. For example, in this study, our academic center sponsors the MOC program and other professionalism and improvement activities. Effective solutions for physicians in small private practices may require a different structure, given the varying practice settings, geography, and data that are available. A standard evaluation, such as the MOC-PEAKS, can be used to close the Part 4 MOC experience gap by helping to identify the best projects or project designs for each practice condition.
This study has a few limitations. First, these results may not be generalizable to other settings or certifying boards. In this study, most physicians participating in the SCH MOC Program were pediatricians, and most practiced in inpatient settings in the Pacific Northwest. SCH financially supports the MOC program (which is managed by QI and medical education experts) and actively seeks to align MOC projects with the needs of the patient, physician, and organization. This support may also have contributed to the relatively high baseline positive attitudes compared with previous MOC surveys.3,19–21 Other programs may differ in project design, participation criteria, organizational leadership, and participant support, which could influence the educational effectiveness of their projects. It will be important to further study the performance of this survey by analyzing data collected in other settings and with other groups of physicians, particularly those in community-based practice or with fewer QI resources. Second, the pre-participation survey responses were mostly positive or neutral. Therefore, the effectiveness of participation may be underestimated in this study and higher in situations where baseline responses are less favorable. Finally, we surveyed physician perceptions of QI method application and impacts on patient care, which can be over- or underestimated.
ABMS Part 4 MOC Programs need data to inform improvement efforts. We successfully developed and implemented a survey to evaluate the learning effectiveness of the Part 4 MOC from the physician perspective. We demonstrate that a pediatric hospital-based MOC program can operationalize an effective and rewarding Part 4 MOC experience that meets the needs of a diverse group of practicing physicians.
Future systematic evaluation of all Part 4 MOC activities and programs will create comparative data that can be used to benchmark Part 4 improvement on a large scale. Meaningful projects, particularly those that demonstrate improvements in patient care, could then be identified and promoted across a variety of practice settings.
Thank you to Mary Alida Brisk for her help with survey development and the focus groups. Thank you also to the physicians of SCH and the surrounding region for their dedication to improving patient care and continual learning.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
- Institute of Medicine (US) Committee on Quality of Health Care in America. In: Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000
- American Board of Medical Specialties. Standards for the ABMS Program for Maintenance of Certification (MOC). 2014. Available at: www.abms.org/media/1109/standards-for-the-abms-program-for-moc-final.pdf. Accessed September 13, 2016
- American Board of Internal Medicine. ABIM announces immediate changes to MOC program. 2015. Available at: www.abim.org/news/abim-announces-immediate-changes-to-moc-program.aspx. Accessed September 13, 2016
- Sandhu AT, Dudley RA, Kazi DS
- Dillman DA, Smyth JD, Christian LM
- Miller MR, Griswold M, Harris JM II, et al
- Peterson LE, Eden A, Cochrane A, Hagen M
- Wiggins RE Jr, Etz R
- Galliher JM, Manning BK, Petterson SM, et al
- Kirkpatrick DL
- Nichols DG
- Copyright © 2017 by the American Academy of Pediatrics