BACKGROUND AND OBJECTIVES: Leaders of pediatric hospital medicine (PHM) have recommended a clinical dashboard to monitor clinical practice and drive improvement. To date, however, no program has reported implementing a dashboard that includes the proposed broad range of metrics across multiple sites. We sought to (1) develop and populate a clinical dashboard to demonstrate productivity, quality, group sustainability, and value added for an academic division of PHM across 4 inpatient sites; (2) share dashboard data with division members and administrations to improve performance and guide program development; and (3) revise the dashboard to optimize its utility.
METHODS: Division members proposed a dashboard based on PHM recommendations. We assessed feasibility of data collection and defined and modified metrics to enable collection of comparable data across sites. We gathered data and shared the results with division members and administrations.
RESULTS: We collected quarterly and annual data from October 2011 to September 2013. We found comparable metrics across all sites for descriptive, productivity, group sustainability, and value-added domains; only 72% of all quality metrics were tracked in a comparable fashion. After sharing the data, we saw increased timeliness of nursery discharges and an increase in hospital committee participation and grant funding.
CONCLUSIONS: PHM dashboards have the potential to guide program development, mobilize faculty to improve care, and demonstrate program value to stakeholders. Dashboard implementation at other institutions and data sharing across sites may help to better define and strengthen the field of PHM by creating benchmarks and helping to improve the quality of pediatric hospital care.
Pediatric hospital medicine (PHM) was first recognized as a specialty by the American Academy of Pediatrics in 1999. Since that time, the specialty has seen rapid growth, with a goal to deliver high-quality, cost-effective care.1 The majority of PHM programs are paid by the hospital or department for coverage during hours when clinical revenue does not cover the cost.2–6 Administrations are willing to continue to pay for this not only because it is a necessary service but also because such programs can decrease cost and length of stay without decreasing quality or patient satisfaction measures.1,4,6–12 As hospital finances tighten, it becomes increasingly important for a PHM program to support these perceptions of value with data.13
In 2005, the American Academy of Pediatrics defined 6 guiding principles for PHM program development. Among these was the recommendation to collect data and assess outcomes to monitor performance.14 This was reiterated in the most recent publication of guiding principles for pediatric hospitalists in 2013.15 The Society of Hospital Medicine published a white paper describing performance metrics for adult hospitalist groups in 2006.16 They then encouraged dissemination of these data to hospitalist group members and stakeholders in 2014.17 In 2009, PHM leaders met at a Strategic Planning Roundtable to discuss the mission, vision, and goals for the field. They generated specific initiatives to enhance the field including the proposal to create a clinical dashboard template to enable programs to compare, monitor, and improve their performance.18 The PHM Dashboard Committee created this template in 2012 including descriptive, quality, productivity, resource utilization, and sustainability measures.19
A clinical dashboard is a visual representation of real-time performance indicators displaying the status of a division’s major functions at a glance. The metrics used should address functions that are most important to the division and its key stakeholders and are depicted across time. A good dashboard enables comparisons, visualization of trends and resource distribution, and identification of relationships between process components. It provides an opportunity to align strategies with departmental and hospital goals and inform decisions.20,21 Although previous publications have demonstrated tracking quality metrics, core measures, and/or value-added activities, there are no publications of PHM groups implementing a dashboard that follows the broad range of metrics recommended by the PHM Dashboard Committee or tracks measures across multiple sites.7,22,23
Our division meets regularly with the administrations of our hospital partners to assess the program but found we were limited to reviewing a narrow set of productivity metrics. We therefore aimed to create a dashboard including a broad range of metrics that we could use to inform our division and key stakeholders, including hospital administrations, about our work. We hypothesized that the best way to enhance our ability to prove our value as well as to improve on it was to create visibility and accountability for these metrics.
Our objectives were as follows:
to develop and populate an inpatient pediatric dashboard to demonstrate productivity, quality, group sustainability, and value added for an academic division of PHM over time and across a network of 4 community and tertiary care hospital inpatient sites;
to share dashboard data with division members and administrations to improve performance and guide program development; and
to revise the dashboard to optimize its utility for program leaders.
At the time of this study, our division consisted of 30 hospitalists and staffed the inpatient units of 1 tertiary care hospital and 4 community hospitals. At the tertiary care hospital, we staffed a general pediatric inpatient unit and a consult service. In the community setting, we provided 24-hour in-house coverage of the pediatric floors, normal newborn nursery, and emergency department consults, as well as nighttime and weekend coverage of the delivery room and special care nursery. The pediatric volume across the 5 sites ranged from 400 to 1000 discharges per year and 900 to 2300 deliveries per year. Each hospitalist covered 1 primary community hospital and spent 2 weeks per year attending at the tertiary care center; this was built into the full-time equivalent (FTE) calculation for each site.
Our community hospitals are partner affiliations, and each site has its own administration and an electronic medical record system that does not interface with those of the other sites. The sites have varying degrees of computerized physician order entry and documentation, with some still relying on paper orders and/or documentation. For the purposes of this dashboard pilot, we included the tertiary care hospital and 3 of the community hospitals because 1 hospital affiliation was terminated and a new one was created during the study timeframe.
On the basis of a literature review that explored the purpose and content of a PHM dashboard, our division leaders proposed a dashboard for group data that would monitor recommended program outputs as well as metrics supporting strategic goals for the division. We collaborated with a member of the hospital quality and safety department to identify metrics already being collected by hospital administrations that aligned with the PHM Dashboard Committee’s recommended metrics, making them easily retrievable. Division leaders reviewed these metrics and reached an informal consensus on which to collect. We grouped the metrics into the following 5 categories: descriptive, productivity, quality, group sustainability, and value added (Supplemental Table 3); we then suggested a plan for frequency of data collection based on suspected seasonal trends and feasibility of collection (Table 1). We chose not to include resource utilization metrics because these data were not obtainable in our current system. To create a complete picture of the services we provide at each hospital and to inform comparisons between hospitals, we included a larger number of metrics that describe the hospital but do not change over time, such as the inpatient subspecialties available. We also wanted to capture all activities that our hospitalists participate in beyond their clinical duties, so we included a larger number of value-added metrics as well.
We assessed the feasibility of data collection by collecting initial data from a single community site. We identified the departments and administrators who had access to the data and could share them, as well as the composition and definition of each available metric. Administrative data were collected by trained staff at each hospital. For chart reviews, we developed a standardized training module, including exclusion criteria, and trained a pediatric hospitalist at each site. We used regular check-in discussions with all sites to ensure consistency of chart review methods over time. When this proved successful, we identified point people and requested equivalent data at the other hospital sites. These point people included the division billing administrative assistant, practice managers, nursing directors, quality departments, finance and business departments, hospitalist site directors and division leader, and the individual hospitalists.
In July 2012, we requested data from October 2011 through June 2012 from the 4 clinical sites. We then collected real-time data on a quarterly basis. We reviewed the data from all sites and made modifications to the data collection process to enable collection of comparable data across all 4 hospitals and streamline the process as much as possible. For example, readmission and direct admission rates were not being collected at all sites, so we added fields on our billing forms to capture these data. We added, modified, and dropped several metrics based on need or feasibility of obtaining these data as well as alignment with current literature and our program’s goals. For instance, we added a metric related to surgical comanagement after we initiated this service at 3 of our sites. The 30-day readmission rate was modified to a 7-day readmission rate on the basis of literature suggesting this to be a more robust quality measure to capture pediatric readmissions for acute conditions.24 We dropped asthma readmission rates because not all quality departments were tracking this metric.
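To make the readmission logic concrete, the following is a minimal sketch of how a 7-day all-cause readmission rate could be computed from billing data. The file name and column names are hypothetical illustrations, not our actual data structures; the sketch assumes one row per discharge with a patient identifier and admission/discharge dates.

```python
import pandas as pd

# Hypothetical export from the billing database: one row per discharge.
df = pd.read_csv("discharges.csv", parse_dates=["admit_date", "discharge_date"])
df = df.sort_values(["patient_id", "admit_date"])

# For each patient, find the admission that follows each discharge.
df["next_admit"] = df.groupby("patient_id")["admit_date"].shift(-1)

# A discharge counts toward the numerator if the same patient
# is readmitted within 7 days of that discharge.
df["readmit_7d"] = (df["next_admit"] - df["discharge_date"]).dt.days.between(0, 7)

print(f"7-day all-cause readmission rate: {df['readmit_7d'].mean():.1%}")
```

Changing the upper bound of the window would yield the 30-day version of the measure.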
We used each hospital site as the unit of analysis for each dashboard metric. Although we were able to collect some of the metrics at the individual level, many metrics reflect the efforts of the team rather than the individual, and the goal of the dashboard was to provide feedback at the program level. We displayed the data in 3-month increments. We also aggregated these data annually to make low-denominator metrics more meaningful, such as Children’s Asthma Care (CAC) compliance, which is collected only for Medicaid patients, and patient satisfaction surveys, which depend on return rates.
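As an illustration of the annual roll-up described above, here is a minimal sketch of aggregating quarterly counts into annual rates. The point is to sum numerators and denominators before dividing, rather than averaging unstable quarterly percentages, which is what makes low-denominator metrics such as CAC compliance more meaningful. The file and field names are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical quarterly counts, one row per site per quarter per metric:
# numerator = cases meeting the measure, denominator = eligible cases.
quarterly = pd.read_csv("quarterly_metrics.csv")

# Sum raw counts across the 4 quarters, then divide. This weights each
# quarter by its actual denominator instead of averaging quarterly rates.
annual = (
    quarterly
    .groupby(["site", "year", "metric"], as_index=False)[["numerator", "denominator"]]
    .sum()
)
annual["rate"] = annual["numerator"] / annual["denominator"]
print(annual)
```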
We entered the data into an Excel database, which we used to create graphs and scorecards for sharing the data. Which data we included, and how we displayed them, depended on the audience. For example, for review of data within the division, we included all 4 clinical sites in graphs depicting key metric outcomes over time. We distributed scorecards biannually at our division meetings (Fig 1). These scorecards focused on metrics that may be influenced by physician behavior (eg, CAC-3 compliance and newborn discharge orders before 10 am) and highlighted individual hospitalists’ value-added activities. We also included productivity data as a point of reference for group members. When sharing data with community hospital administrations, however, we presented only data for the individual site.
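As a sketch of how such audience-specific views could be produced from a single data source (our data lived in an Excel workbook; the file name, column names, and site labels below are hypothetical), one plotting function can render both the all-sites division graph and a single-site graph for a community hospital administration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of the dashboard workbook: one row per site per quarter.
df = pd.read_excel("dashboard.xlsx")  # columns: site, quarter, pct_dc_by_10am

def plot_metric(data, sites, title, outfile):
    """Plot one metric over time for the given subset of sites."""
    fig, ax = plt.subplots()
    for site in sites:
        sub = data[data["site"] == site]
        ax.plot(sub["quarter"], sub["pct_dc_by_10am"], marker="o", label=site)
    ax.set_ylabel("% newborn discharge orders by 10 AM")
    ax.set_title(title)
    ax.legend()
    fig.savefig(outfile)
    plt.close(fig)

# Division meeting: all sites on one graph to enable comparison.
plot_metric(df, sorted(df["site"].unique()),
            "Newborn discharge timeliness, all sites", "division_view.png")

# Community hospital administration: that site's data only.
plot_metric(df, ["Hospital 1"],
            "Newborn discharge timeliness, Hospital 1", "hospital1_view.png")
```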
We collected quarterly data from October 2011 to September 2013 for 17 descriptive metrics, 10 productivity metrics, 18 quality metrics, and 8 group sustainability metrics. We successfully collected 100% of our descriptive, productivity, and group sustainability metrics but only 76% of our quality metrics. We used SurveyMonkey to collect data on 18 value-added metrics from our individual hospitalists and had a 68% response rate (Table 1). We found comparable metrics across all sites for the productivity, group sustainability, and value-added domains; however, only 72% of all quality metrics were tracked in a comparable fashion across sites. For example, different hospitals were collecting different physician-related patient satisfaction metrics, and 1 of our community sites was not collecting patient satisfaction scores from pediatric patients at all.
Our dashboard has served a number of purposes. From our productivity data, we observed a decrease in inpatient discharges across all community sites (Fig 2). This decrease was of particular concern to the administration at 1 of our hospital sites. Identifying that the decrease was not isolated to that hospital helped us frame our conversations with administrators. In addition, we were able to use our productivity data to redirect the administration’s attention from our discharge rates to our work Relative Value Units (wRVUs). While inpatient discharges decreased, our newborn coverage increased, and our overall wRVUs remained relatively stable (Fig 2). Furthermore, seeing the data prompted us to look more closely at our documentation and billing practices to support accurate determination of observation versus inpatient status.
Through our dashboard, we were able to highlight efficiency in newborn discharges as an indicator of quality and present it to our hospitalists as a common goal. At 2 community sites, the rate of discharge by 10 am for newborns and mothers who were ready for discharge was 72% before the dashboard; by the end of this pilot, it was 97% (Fig 3). There were multiple interventions during this time, including sharing data with division members, adding a nurse practitioner at 1 site, and emphasizing the importance of night hospitalist rounding on newborn discharges at another. During the same time period, there was no change in performance at Hospital 2, and the dashboard helped direct our attention toward improving timely discharges at this site.
Our group sustainability data supported the addition of clinical FTEs at 1 of our sites. As shown in Fig 4, Hospital 1 was experiencing high burnout and hospitalist turnover rates at the onset of dashboard data collection. The decision to add FTEs to Hospital 1 was made before the installation of our dashboard; thus, the finding that workload was approaching that of the other sites was not a direct result of our dashboard. Rather, our dashboard verified that the additional staffing was justified, and when the hospital decreased the FTEs for the summer season after completion of this dashboard pilot, we had the data to recommend against doing so in future years because wRVUs/FTE again climbed well above those of the other sites.
We also saw increased participation in value-added activities after sharing the first year’s data with the division. Participation in hospital committees increased from 17 to 21 hospitalists, and grant funding increased from 5 to 8 hospitalists. The visibility of individual hospitalists’ value-added activities appeared to be a motivating factor. In addition, seeing how each site distributed the various responsibilities may have encouraged some site leaders to delegate more responsibility to interested group members. Because there is minimal overlap of our hospitalists across hospital sites, these differences were not apparent before dashboard implementation.
We demonstrated the feasibility of implementing a pediatric inpatient dashboard across 3 community hospitals and 1 tertiary care center, each with a unique medical record system and administrative personnel. We obtained multiple metrics to characterize each of the 5 domains because a single metric would be insufficient to describe performance given the broad scope of practice. Collecting data on multiple domains provided a cross-sectional view of the program and allowed for simultaneous evaluation of metrics such as wRVUs/FTE and hospitalist turnover to gain insight into how positive changes in one domain may contribute to negative changes in another. These data provided context and allowed us to better anticipate the benefit and cost of decisions we were making. We also collected program descriptive metrics annually and anticipate that hospitals interested in comparing their program outcomes to others would use this information to identify similar programs for comparison. To support other programs’ efforts to create their own dashboard, we share a list of lessons learned (Table 2).
By including 4 inpatient sites, we were able to make comparisons, identify when variations were system-wide rather than site-specific, and use this information to guide program development. Our 3 community programs are similar in staffing and scope of service, so comparisons between these programs were reasonable. As displayed in Fig 4, comparisons are less useful between a tertiary care site and community hospital sites because of differences in staffing and patient acuity. Alternatively, the longitudinal data collection allowed each site to be compared with itself over time, which controlled for many hospital-level factors contributing to outcomes. We found the comparisons between sites useful when providing feedback to physicians, and although we cannot be certain of causality, we believe that sharing the data led to changes in physician practice.
As we continue to collect data, we recognize more fully how these data can be useful to our group. We continue to identify new collectible measures that may affect our quality, productivity, or support from our key stakeholders. Going forward, we plan to modify and expand our metrics to include factors such as disease-specific quality measures, referring physician satisfaction data, and trainee feedback on hospitalist teaching. We are hopeful that PHM leadership can use our dashboard success as a model for creating a national online dashboard.
As with all projects that rely on administrative data, our dashboard is not without limitations. The validity of our data depends on how accurately they are recorded. For example, our inpatient discharge data are accurate only if all physicians bill appropriately and patients are listed under the correct attending physician and service. In addition, the data collection process is hindered every time there is turnover of administrative staff because new personnel must be oriented to the data we are collecting. Data with small denominators, such as our CAC measures and patient satisfaction scores, create additional challenges in using metrics to drive improvement.25–27 To help people interpret the data and trends, we found it important to indicate when a large decline or improvement in performance was due to a single case. Because of variation in data collection and reporting among different hospitals, there are limits to data comparability between sites. In addition, data collection from multiple hospital administrations is extremely time-consuming and labor-intensive; this not only puts an added burden on the division but also leads to delayed reporting.
Finally, the utility of our dashboard is limited by the current lack of evidence for, and consensus on, PHM quality metrics and benchmarks that lead to improvement in health outcomes.22,27–32 We anticipate that this will also be the largest barrier to PHM-wide dashboard implementation. This lack of performance benchmarks also limits conversations with hospital administrations regarding the value that we add. We are excited by the recent literature starting to fill this gap in the field and encourage this work to continue.32–36 The ability to compare performance with national standards would not only better define the field of PHM but also could drive the field toward improved quality and efficiency.
Clinical dashboards have great potential utility for pediatric hospital medicine groups. Dashboard implementation at other institutions and sharing data across sites would help to better define and strengthen the field of PHM and could lead to establishment of benchmarks. It would help to identify trends in the care of pediatric patients and drive improvement in the quality of pediatric hospital care. Although creating a dashboard requires a substantial investment of time and resources, it ultimately has the potential to guide program development, mobilize faculty to improve care, and demonstrate the value of the program to stakeholders.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
Dr Fox modified the initial dashboard design, collected all data, performed data analyses, and drafted the initial manuscript; Dr Walsh provided quality improvement guidance and expertise; Dr Schainker conceptualized and designed the initial dashboard, approved modifications of the dashboard, and informed data analyses; and all authors approved the final manuscript as edited.
- CAC: Children’s Asthma Care
- FTE: full-time equivalent
- PHM: pediatric hospital medicine
- wRVU: work Relative Value Unit
- Section on Hospital Medicine. Guiding principles for pediatric hospital medicine programs. Pediatrics. 2013;132(4):782–786
- The Society of Hospital Medicine’s Benchmarks Committee. Measuring Hospitalist Performance: Metrics, Reports, and Dashboards. Philadelphia, PA: Society of Hospital Medicine; 2006