OBJECTIVES: Our goal was to develop a comprehensive performance tracking process for a large pediatric hospitalist division. We aimed to use established dimensions and theory of health care quality to identify measures relevant to common inpatient diagnoses, reflective of current standards of clinical care, and applicable to individual physician performance. We also sought to implement a reproducible data collection strategy that minimizes manual data collection and measurement bias.
METHODS: The Washington University Division of Pediatric Hospital Medicine provides clinical care in 17 units within 3 different hospitals. Hospitalist services were grouped into 5 areas, and a task group of divisional leaders representing the clinical services was created. The group was educated on health care quality theory and tasked with searching clinical practice standards and quality resources. The group proposed a broad spectrum of performance questions that were screened for electronic data availability and modified into measurable formulas.
RESULTS: Eighty-seven performance questions were identified and analyzed for their alignment with known clinical guidelines and value in measuring performance. Questions were distributed across quality domains, with most addressing safety. They reflected structure, outcome, and, most commonly, process. Forty-seven questions were disease specific, and 79 questions reflected individual physician performance; 52 questions had electronically available data.
CONCLUSIONS: We describe a systematic approach to the development of performance indicators for a pediatric hospitalist division that can be used to measure performance on a division and physician level. We outline steps to develop a broad-spectrum quality tracking process to standardize clinical care and build invaluable resources for quality improvement research.
There is an urgent need for development of quality measures in pediatrics. The Institute of Medicine (IOM) and the National Research Council report on child health care quality recognizes that development of pediatric quality measures lags behind that for adults.1 There have been significant recent increases in the number of endorsed measures for outpatient and preventative care of children and adolescents.2 However, with the exception of asthma management, established inpatient pediatric measures often focus primarily on intensive and surgical care, as well as general processes such as infection control and readmission rates.
Pediatric hospitalist organizations and individuals have recently made considerable efforts to look at quality indicators for isolated conditions and processes,3,4 but there are no publications that describe a systematic and comprehensive approach for developing a process to assess quality of care within a pediatric hospitalist program. We describe an internal process for the development of performance indicators for our pediatric hospitalist division. We used health quality theory, established approaches to performance measurement, and evidence-based medicine as our guides to this development.
Scope of Hospitalist Service
The Division of Pediatric Hospitalist Medicine at Washington University School of Medicine is staffed by 50 physicians who provide clinical care at St Louis Children’s Hospital (SLCH), a large pediatric academic center with 15 000 admissions per year, and 2 community hospitals (Missouri Baptist Medical Center and Progress West HealthCare Center). All physicians provide clinical coverage at these 3 sites, covering 17 hospital units. At SLCH, coverage includes 7 inpatient units, the emergency department (ED), 2 sedation units, a newborn nursery, and a referral call center facilitating patient transfers. Our physicians also provide care in the inpatient pediatric unit, the ED, and the newborn nursery at each community hospital.
We assessed our need for performance tracking and identified the following goals: (1) to develop performance indicators that can be defined within the accepted IOM dimensions of quality health care, which include safety, effectiveness, efficiency, timeliness, patient-centeredness, and equity; (2) to develop quality measures that are meaningful to pediatric hospitalist clinical care that can be applied to common diagnoses, reflect up-to-date standards of clinical care, can be used to track performance across hospitals and hospital units, can be applied to individual physician performance, and can be used to make changes that will standardize and improve care; and (3) to implement a data collection strategy that is reliable and reproducible while minimizing the potential for human error and measurement bias.
The development of performance tracking was divided into 3 phases: (1) identifying performance questions; (2) developing measurable formulas to answer the identified performance questions; (3) working with technical experts to develop an automated data collection process that extracts the necessary data from electronic medical records (EMRs) or administrative data and can be applied to the developed formulas. This article describes the process that we used for the first 2 phases.
Phase I: Approach to Identifying Performance Questions
Our process used a modified Delphi method adapted to face-to-face meetings. We divided our hospitalist clinical services into 5 areas based on the clinical care provided: (1) general inpatient care; (2) emergency care; (3) newborn nursery care; (4) medical control; and (5) procedural sedation. All services are provided at all 3 hospitals, with the exception of the medical control service, which is provided only at SLCH. We sought volunteers among divisional hospitalist leaders to represent each of the 5 areas. These leaders comprised the resulting hospitalist quality improvement task force and thus acted as an internal expert panel, committing their time and effort to this project. Group facilitators then educated the panel on health care quality theory and defined the terminology used, including IOM quality characteristics and Donabedian measurement categories, via a 1-hour lecture, a written handout, and 2 discussion sessions. The goals of the project were discussed and agreed on by the panel. Group members were then asked to search the American Academy of Pediatrics Web site, UpToDate, and PubMed for available policies, recommendations, and clinical guidelines. In addition, we searched publicly available health care quality databases of endorsed or submitted measures: National Quality Measures Clearinghouse (http://qualitymeasures.ahrq.gov/browse/by-domain.aspx), National Committee for Quality Assurance (http://www.ncqa.org/Home.aspx), and National Quality Forum (http://www.qualityforum.org/Measures_List.aspx).
Task group members were then asked to provide a broad list of performance questions they believed would be meaningful to the evaluation of pediatric hospitalist clinical care. For inpatient topics, the top 5 most common admitting diagnoses for general pediatric patients at SLCH, as provided by the hospital administrative data repository (Hospital Management Information System), were used to guide our selection of performance questions. We also developed a tool to evaluate performance questions based on quality characteristics and value (Table 1).
Phase I: Results
Eighty-seven performance questions were identified by the expert panel, with most questions pertinent to general pediatric inpatient care (Fig 1). All 87 questions were evaluated individually by using the developed tool, and they were discussed and agreed on by the panel. Most of the proposed indicators were related to 1 or more of 7 common themes (Table 2). For inpatient general pediatrics, 26 questions related to the top 5 admitting diagnoses for general pediatric patients at SLCH: bronchiolitis, asthma, pneumonia, gastroenteritis, and cellulitis. The remaining questions were not disease specific (Table 3). Twenty-one inpatient performance questions could be subjected to physician-specific analysis.
A sample worksheet showing all proposed inpatient performance indicators and representative indicators from the other 4 services is shown in Tables 4 and 5, and a summary of their distribution is shown in Table 6. The majority of indicators related to multiple IOM domains, most commonly safety (63%) and effectiveness (64%). We also assigned each performance indicator an “IOM value” score representing how many IOM domains it reflects. For example, an indicator that looked at ED head computed tomography (CT) utilization for “minor head injury” diagnoses related to patient safety (risk of radiation), effectiveness (low yield and not indicated), and efficiency (unnecessary cost) and received a score of 3. Among Donabedian categories, process was most represented (78%). Ninety-one percent of performance indicators can potentially be used to assess individual physician performance.
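Because the “IOM value” score is simply the number of distinct IOM domains an indicator touches, it is straightforward to compute once indicators are tagged. The following minimal Python sketch illustrates the idea; the indicator list and domain assignments are hypothetical, except for the head CT example and its score of 3 described above.

```python
# Minimal sketch: "IOM value" = number of distinct IOM domains an indicator touches.
# Indicator names and domain assignments are illustrative, not the actual worksheet.
IOM_DOMAINS = {"safety", "effectiveness", "efficiency",
               "timeliness", "patient-centeredness", "equity"}

indicators = {
    "ED head CT utilization for minor head injury":
        {"safety", "effectiveness", "efficiency"},  # score of 3, per the example above
    "Hypothetical indicator X":
        {"timeliness"},
}

for name, domains in indicators.items():
    assert domains <= IOM_DOMAINS, f"unrecognized domain in {name}"
    print(f"{name}: IOM value = {len(domains)}")
```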
Phase II: Approach to the Development of Performance Measures
Phase II involved turning performance questions and statements into measurable entities. Turning performance indicator questions into measures consisted of 2 main steps: (1) identifying components that can be expressed as numerical values that directly or indirectly answer the quality question; and (2) identifying electronically available data that can be used to provide these numerical values. All of the identified questions can be analyzed retrospectively through rigorous chart review. We wanted to avoid manual data collection because it is time-consuming and hard to perform routinely and on a large scale.1 We performed additional screening of all performance indicators for likely availability of necessary information in EMRs, administrative data, or billing records. During the initial screening process, we discussed the relevant components of identified performance questions and the way these data components are recorded; specifically, whether they are entered into some kind of electronic record or manually written into patient charts.
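As an illustration of this 2-step translation, each performance question can be paired with a countable numerator and denominator and with the electronic systems expected to hold each component. The Python sketch below is a hypothetical record structure for this purpose; the field names and the worked example are ours for illustration, not the division’s actual worksheet format.

```python
from dataclasses import dataclass

@dataclass
class MeasureDefinition:
    """Hypothetical record pairing a performance question with the
    numerical components and data sources needed to answer it."""
    question: str             # performance question as posed by the panel
    numerator: str            # countable event that answers the question
    denominator: str          # eligible patient population
    data_sources: list[str]   # electronic systems expected to hold the components
    electronically_available: bool  # does it survive the availability screen?

# Illustrative example based on the head CT indicator discussed in this article
head_ct_measure = MeasureDefinition(
    question="Is head CT overused for minor head injury in the ED?",
    numerator="ED visits with a minor head injury diagnosis and a head CT performed",
    denominator="ED visits with a minor head injury diagnosis",
    data_sources=["billing records (ICD/CPT codes)", "ED information system"],
    electronically_available=True,
)
```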
Phase II: Results
We assumed that patient age and primary diagnoses are always available electronically regardless of service or hospital; these are common components of “administrative” data tracked by all hospitals for billing and other purposes. Medication orders and administration can also be tracked independently of EMRs through the pharmacy database. Diagnosis and procedural information can also be obtained through billing records that use and record International Classification of Diseases and Current Procedural Terminology codes. SLCH inpatient and sedation units’ orders and documentation of vital signs, intake, and output, as well as laboratory results, are entered into Sunrise Clinical Manager (Allscripts, Chicago, IL). SLCH ED uses the Wellsoft Emergency Department Information System (Wellsoft Corporation, Somerset, NJ) for all patient charting and results, but the history and examination details are usually “free-typed” and cannot be easily identified without individual chart review. Both community hospitals use Allscripts’ Emergency Department Information System for all charting and results, but physicians usually free-type history and physical examination details rather than using prebuilt check-boxes and fields. Inpatient and nursery patients’ vital signs, weight, intake, and output in both our community hospitals are documented in HorizonWP Physician Portal (McKesson Corporation, San Francisco, CA).
We used this information to screen 87 performance indicators and identified 42 (48%) that could be potentially answered by using available electronic data, with an additional 10 that could be extracted electronically with small modifications to physician data entry (Tables 4, 5, and 6). All emergency care performance indicators were amenable to electronic data collection due to full EMR charting. Group facilitators further prioritized 12 performance indicators for initial development that represented all 5 service lines. This selection was based on the IOM value score, ease of measurement, and, in some cases, institutional or national interest. For example, the head CT utilization for minor head injury indicator was discussed at the institutional Quality and Safety Committee meeting and supported internally by our radiology department and ED. The use of bronchodilators in bronchiolitis was a benchmark quality improvement study via the Value in Inpatient Pediatrics Network collaborative. These proposed indicators were expressed as mathematical ratios with identified definitions and data sources. Table 7 provides an example of this process.
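In the spirit of Table 7, a prioritized indicator reduces to a ratio computed over an electronic extract. The Python sketch below shows the head CT utilization indicator computed from mock administrative records; the diagnosis codes and record layout are placeholders for illustration, not our actual data specification.

```python
# Hypothetical sketch: head CT utilization for minor head injury, expressed as
# numerator/denominator over mock administrative records. Diagnosis codes and
# the record layout are placeholders for illustration only.
MINOR_HEAD_INJURY_CODES = {"959.01", "850.0"}  # placeholder ICD-9 codes

encounters = [
    {"id": 1, "dx": "959.01", "head_ct": True},
    {"id": 2, "dx": "850.0",  "head_ct": False},
    {"id": 3, "dx": "786.2",  "head_ct": False},  # excluded: not a minor head injury code
]

denominator = [e for e in encounters if e["dx"] in MINOR_HEAD_INJURY_CODES]
numerator = [e for e in denominator if e["head_ct"]]

if denominator:
    rate = len(numerator) / len(denominator)
    print(f"Head CT utilization: {len(numerator)}/{len(denominator)} = {rate:.0%}")
```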
The need to develop pediatric quality indicators is universally recognized.1 The approach that uses evidence-based medicine and an “expert panel” to look at possible quality measures has been taught by quality experts and used nationally and internationally.27–30 We describe the application of a similar process to the development of performance tracking for a large academic pediatric hospitalist division. The advantage of our approach is that it allowed us to look at our clinical care comprehensively rather than focus on 1 disease or measure at a time. As a result, we were able to prioritize the development of performance indicators based on the totality and complexity of our clinical services, selecting them based on both importance and feasibility of measurement.
The objective of examining our clinical care quality consistently across service locations was critical to our process. All of the proposed performance indicators, with the exception of the medical control service, reflect general measures (such as cost of care or length of stay) and conditions that are common to a large quaternary center, as well as to our community hospitals. SLCH has an established scorecard and quality improvement process that is used for internal measurement as well as for reporting and accreditation. However, most entries of the scorecard focus on critical care and universal measures not directly related to hospitalist clinical decision-making (eg, hand hygiene, central line infections). These measures are endorsed and accepted universally for large pediatric centers, partially due to the fact that they have been developed and validated for “adult medicine.”31,32 By default, they are not specific to pediatrics. Understandably, these measures reflect very important safety issues, and they need to be addressed and tracked. However, because they concern intensive care, they often are not applicable to community hospitals that do not provide intensive care services. Conversely, many “general” adult clinical process measures can be easily applied to community hospitals but not to pediatric patients. Because of this gap, our community hospitals struggle to track performance and quality for their pediatric patients, and their scorecards do not include pediatric-specific clinical process measures. Because most inpatient pediatric encounters in the United States take place in non–children’s hospitals or community hospitals and not in the large academic pediatric centers, this is a gap that must be addressed urgently.33 Our model took the methods and processes established for adult measures and applied those to the pediatric equivalents of common adult diagnoses, with the goal of tracking our clinical performance across the board, regardless of hospital or unit. This approach adds an additional advantage for a program with a large group of physicians providing care at multiple sites throughout the year: we can compare performance between sites as well as the performance of individual physicians across locations of identical care.
In addition to focusing on cross-site measures, our method was a grassroots process that included many division hospitalists. It required educating them on health care quality theory and background, which significantly improved their understanding of the subject and their appreciation of its importance. One of the common themes we noticed in this process is the negative view some physicians have of attempts to evaluate and measure their clinical work. Some physicians view quality improvement processes as failing to take into account the “art of medicine” or individual preferences, potentially resulting in unwarranted punishment of providers who do not score at goal. During our work with quality improvement, this concern has been raised by both pediatric and adult providers during division meetings, individual discussions, and quality review committee sessions at our community hospitals when ongoing professional practice evaluation and performance measures were discussed.
To address this concern, our model was designed to involve physicians in creating performance indicators meaningful to them. Because physicians were involved at every step of this development, we tried to account for clinical judgment situations in which the “standard” treatment is not always the best choice. One example we used in our process was the preferential use of ampicillin to treat patients hospitalized with uncomplicated pneumonia21; in our development process, we discussed exclusion criteria for this clinical performance indicator, such as children with malignancies, cystic fibrosis, and other significant comorbidities. This involvement and the transparency of our development process changed the perception of measuring quality and made our hospitalists more accepting of being evaluated.
A large focus of our approach was to create ways to collect the data electronically. As a rule, hospitalists have a high clinical load and little time for research or routine review of a large number of charts.34,35 With the rapid growth of EMR development and implementation, there is a general consensus that this significant amount of data can and should be used for tracking and analysis.1,32 Although the extraction of data from EMRs or administrative databases cannot be performed routinely by physicians, all hospitals have information technology teams who generate administrative reports, cost analyses, mandated scorecards, and best-in-class reports. We found that our hospitals’ administrative and quality committees are interested in working with us. This is especially true of community hospitals, where these committees are increasingly directed to measure quality and rank individual physician performance but often do not have a clear understanding of what should be measured. In our process, we established a mutually beneficial partnership in which we offered our expertise and help with developing ways to measure clinical care and decision-making, and they provided technical expertise and data. The fact that our measures can be easily applied to all of the hospitals within our hospital system, including community sites where we are not present, made our project even more valuable to our nonphysician administrative and quality leaders.
The importance of the partnership between our hospitalist quality leaders and regional hospital quality and informatics leaders cannot be overstated. We found that they valued our own interest in measuring our performance and the broad applicability of our process. They offered technical support and access to data and have been working with us closely to develop the electronic data collection process for our proposed measures. Through our collaboration, we learned to appreciate the complexity of EMRs and the importance of physicians using them as intended, rather than as a notepad. They learned to appreciate the complexity of our medical decision-making process that makes it hard to develop “black and white” performance indicators.
An accepted model of quality management includes a cycle of measurement, assessment, and improvement.27 Although the improvement of clinical processes and care is the ultimate goal, we recognized that the measurement of performance constitutes the necessary first step. We also realized that in pediatric hospital medicine, we often do not have the convenience of taking a validated measure from another field and applying it to our patients. Our initial work attempted to identify clinical processes and outcomes that would be meaningful to our physicians and link them with measurable data by forming a partnership with our information technology team. In addition, our evaluation tool selected proposed performance indicators only if they had a potential to improve clinical process, thus making it feasible to complete the quality cycle. Our next steps focus on validation and subsequent improvement strategies. We are currently working with our subspecialty consultants, quality experts, and our information technology group to collect pilot data, perform rigorous chart review and data analysis, and confirm EMR definitions to assure specificity of our data collection process and its use in performance tracking.
Performance analysis of clinical care is often difficult due to the variability and individuality of each patient scenario. Few clinical guidelines are applicable to all scenarios because most allow for exclusions secondary to patient and disease variability. Our approach will allow us to compare our performance and adherence to clinical guidelines between hospitals and hospital units, thus assisting with standardization of care across services. Although the ultimate goal is to improve clinical care and outcomes while decreasing costs, performance measurement is the first necessary step in this process and a prerequisite to the identification of areas of improvement and strategies for implementation of change. All of the proposed measures can be applied to any of our sites, reinforcing our commitment to providing patients with the same standard of care regardless of location. We believe that our approach will allow us to identify ways to improve and standardize care. In addition to quality improvement, our process and collected data can be an invaluable foundation for clinical research that will permit us to look at outcomes and cost efficiency, as well as analyze patient and treatment factors that may influence these outcomes.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
Abbreviations
- CT: computed tomography
- ED: emergency department
- EMR: electronic medical record
- IOM: Institute of Medicine
- SLCH: St Louis Children’s Hospital
References
- Child and Adolescent Health and Health Care Quality: Measuring What Matters
- McCulloh RJ, Smitherman S, Adelsky S, et al
- Hain P, Daru J, Robbins E, et al
- Engle WA, Tomashek KM, Wallman C; Committee on Fetus and Newborn, American Academy of Pediatrics
- Committee on Infectious Diseases; Committee on Fetus and Newborn; Baker CJ, Byington CL, Polin RA
- Adamkin DH; Committee on Fetus and Newborn
- Missouri Revised Statutes
- Missouri Department of Social Services
- American College of Emergency Physicians Clinical Policies Committee; American College of Emergency Physicians Clinical Policies Subcommittee on Pediatric Fever
- Lye PS; American Academy of Pediatrics Committee on Hospital Care and Section on Hospital Medicine
- Kuppermann N, Holmes JF, Dayan PS, et al; Pediatric Emergency Care Applied Research Network (PECARN)
- The Joint Commission
- Orr RA, Felmet KA, Han Y, et al
- National Quality Forum Endorsed Standards
- American Academy of Pediatrics; American Academy of Pediatric Dentistry; Coté CJ, Wilson S; Work Group on Sedation
- American Academy of Pediatrics Subcommittee on Diagnosis and Management of Bronchiolitis
- AAP Statement of Endorsement of Infectious Disease Society of America Guidelines
- Bradley JS, Byington CL, Shah SS, et al; Pediatric Infectious Diseases Society and the Infectious Diseases Society of America
- National Asthma Education and Prevention Program
- Nkoy FL, Fassl BA, Simon TD, et al
- King C, Glass R, Bresee J, Duggan C
- The Joint Commission
- Spath P
- Chen AY, Schrager SM, Mangione-Smith R
- National Healthcare Safety Network (NHSN)
- McDonald KM, Davies SM, Haberland CA, Geppert JJ, Ku A, Romano PS
- Merenstein D, Egleston B, Diener-West M
- Bekmezian A, Teufel RJ, Wilson KM