Development and Testing of the Pediatric Respiratory Illness Measurement System (PRIMES) Quality Indicators
OBJECTIVES: To develop and test quality indicators for assessing care in pediatric hospital settings for common respiratory illnesses.
PATIENTS: A sample of 2796 children discharged from the emergency department or inpatient setting at 1 of the 3 participating hospitals with a primary diagnosis of asthma, bronchiolitis, croup, or community-acquired pneumonia (CAP) between January 1, 2010, and December 31, 2011.
SETTING: Three tertiary care children’s hospitals in the United States.
METHODS: We developed evidence-based quality indicators for asthma, bronchiolitis, croup, and CAP. Expert panel–endorsed indicators were included in the Pediatric Respiratory Illness Measurement System (PRIMES). This new set of pediatric quality measures was tested to assess feasibility of implementation and sensitivity to variations in care. Medical records data were extracted by trained abstractors. Quality measure scores (0–100 scale) were calculated by dividing the number of times indicated care was received by the number of eligible cases. Score differences within and between hospitals were determined by using the Student’s t-test or analysis of variance.
RESULTS: CAP and croup condition-level PRIMES scores demonstrated significant between-hospital variations (P < .001). Asthma and bronchiolitis condition-level PRIMES scores demonstrated significant within-hospital variation, with emergency department scores (means [SDs] 82.2 [6.1]–100.0 [14.4]) exceeding inpatient scores (means [SDs] 71.1 [2.0]–90.8 [1.3]; P < .001).
CONCLUSIONS: PRIMES is a new set of measures available for assessing the quality of hospital-based care for common pediatric respiratory illnesses.
Acute and chronic respiratory illnesses account for a substantial proportion of childhood disease burden and health care utilization in the United States.1,2 Respiratory illnesses are the most frequent causes of pediatric hospitalizations.1,2 In 2012, pneumonia was the most prevalent diagnosis among children hospitalized with respiratory illness, with 118 934 children treated for this condition.2 More than 6 million US children (9%) have asthma, and 3 million suffer a severe exacerbation annually,3 resulting in 113 840 pediatric hospitalizations costing approximately $433 million.2 An additional 112 332 children are hospitalized for bronchiolitis annually, with costs exceeding $500 million.2
Significant gaps exist between pediatric clinical practice guidelines and the health services provided to our nation’s children.4–9 A comprehensive study of outpatient pediatric quality of care found that US children on average receive 46% of recommended care.10 Despite relatively standardized and well-publicized recommendations for appropriate asthma care, children with asthma in that study received just 45% of recommended care. Although this level of adherence to standards for outpatient care is problematic, the impact of poor quality of care on hospitalized children, especially those at increased risk for respiratory failure, is likely to have even more serious consequences. Thus, monitoring and improving processes of care for hospitalized children is essential to ultimately improve their health outcomes.
Although a limited number of process measures for asthma have been developed to evaluate pediatric inpatient quality of care,11,12 there is a dearth of measures for other respiratory illnesses commonly observed in the pediatric emergency department (ED) and inpatient settings. The objectives of this study were to rigorously develop and field test a set of process-of-care quality indicators focused on evaluating the quality of both ED and inpatient care for pediatric respiratory illness: the Pediatric Respiratory Illness Measurement System (PRIMES).
Selecting Conditions for Quality Indicator Development
We selected prevalent respiratory conditions observed in the ED and inpatient settings for measure development so that the indicators would apply to a substantial proportion of children cared for in these settings. We first analyzed the Pediatric Health Information System database developed and maintained by the Children’s Hospital Association, which includes ED and inpatient discharge diagnostic data for 45 US children’s hospitals.13 This analysis indicated that asthma, bronchiolitis, and community-acquired pneumonia (CAP) are the top 3 reasons for respiratory illness admissions to these hospitals. Adding hospitalizations for croup to the top 3 respiratory conditions accounted for 16% of admissions to Pediatric Health Information System hospitals nationally. Thus, we selected these 4 conditions for further quality indicator development.
Literature Review and Developing Condition-Specific Quality Indicators
Two research staff reviewed the literature to identify key elements of appropriate diagnosis, treatment, and follow-up care for each of the 4 conditions. A search of Medline, the Agency for Healthcare Research and Quality National Guideline Clearinghouse, and the Cochrane Database of Systematic Reviews was conducted to identify all English language review articles, guidelines, and studies published between 1999 and 2009 on the hospital-based management of each of the 4 respiratory conditions in children. The Medline search used the following terms: asthma, bronchiolitis, croup, pneumonia, aspiration pneumonia, emergency department/room, hospital and hospital management, inpatient, child, diagnosis, management, treatment, and follow-up. The American Academy of Pediatrics (AAP), National Heart, Lung, and Blood Institute, Cincinnati Children’s Hospital Medical Center, and British Thoracic Society Web sites were also searched for relevant guidelines. We also examined selected articles from the reference lists of reviews conducted to inform guideline development for all 4 conditions. Potentially relevant citations included in the reference lists of articles obtained in the original searches were retrieved and reviewed.
Based on these literature and guideline reviews, staff drafted quality indicators related to diagnosis, treatment, and follow-up in the ED and inpatient settings for each of the 4 conditions. The draft indicators addressed what constitutes an adequate history and physical examination on admission, which diagnostic tests should be done on admission to the ED or hospital, which treatments should be administered during the course of the illness, which procedures should be performed, which tests and treatments should be avoided, appropriate monitoring during ED admission and hospitalization, and appropriate follow-up of ED and hospital admissions. The level of evidence supporting each indicator was formally rated by using the University of Oxford’s Centre for Evidence-Based Medicine method: 1 = randomized controlled trials; 2 = nonrandomized controlled trials, cohort, and outcome studies; 3 = case-control studies; 4 = case series; and 5 = expert consensus.
RAND–University of California Los Angeles Modified Delphi Method for Final Indicator Selection
The validity and feasibility of the indicators were evaluated by using the RAND–University of California Los Angeles modified Delphi method.14 This method is a structured approach to expert panel deliberations that uses individual panel member ratings of indicators rather than consensus to arrive at recommendations. In the first round of rating, panelists received the literature reviews, quality indicators, and rating sheets for the 4 conditions and were asked to rate each indicator for validity and for feasibility on a 9-point scale (1 = low, 9 = high). Validity was defined to mean that adequate scientific evidence or professional consensus exists to support the indicator, there are identifiable health benefits for patients who receive the specified care, clinicians or hospitals with higher rates of adherence would be considered higher quality, and a high proportion of the determinants of adherence are under the clinician’s or hospital’s control. Ratings of 1 to 3 mean that the indicator is not a valid criterion for evaluating quality; ratings of 4 to 6 mean that the indicator is an equivocal criterion for evaluating quality; and ratings of 7 to 9 mean that the indicator is a valid criterion for evaluating quality. This method of selecting indicators is reliable and has been shown to have content, construct, and predictive validity in other applications.15–19
Feasibility means that a “typical” ED or inpatient medical record (paper or electronic) is likely to contain the information needed to determine eligibility for and adherence to the indicators, and estimates of performance based on medical records data are likely to be reliable. Ratings of 1 to 3 mean that it is not feasible to score the indicator by using data found in the average medical record; ratings of 4 to 6 mean that there is likely considerable variability in the availability of information needed to score the indicator; and ratings of 7 to 9 mean that it is feasible to consistently find information in the medical record to score the indicator or the absence of information itself is a sign of poor quality.
Selecting Members for the Expert Delphi Panel
Nominations for the Delphi panel were sought from several relevant professional societies, including the American Thoracic Society, the American Academy of Allergy, Asthma, and Immunology, the Pediatric Infectious Diseases Society, the AAP, the Academic Pediatric Association (Quality Improvement Special Interest Group and Hospital Medicine Special Interest Group), and the Society for Hospital Medicine. Nine individuals from among those nominated were invited and agreed to participate on the panel. RAND panels have historically consisted of 9 members because small-group analysis suggests that it is difficult to hold a meaningful discussion with larger numbers of panelists.14
Conducting the Expert Delphi Panel
In advance of the panel, a conference call was held with the panelists to orient them to the method, procedures, and requirements for participating in the Delphi panel. Six weeks before meeting in person, the panelists were sent literature reviews and 136 draft quality indicators for the 4 respiratory conditions and asked to provide their initial ratings.
Panelists submitted their round 1 Delphi scores for the 136 indicators to the research team. Before convening for a 2-day in-person meeting in May 2010, panelists were sent the results of their first round of scoring. The results included the distribution of ratings for each indicator on the 9-point scales (without revealing the ratings of specific panelists), the median rating, and a caret indicating the panelist’s own initial rating for each indicator.
The panel discussed indicators if the median validity score was <7 but >3 and/or the median feasibility score was <4, or the mean absolute deviation from the median scores indicated either an indeterminate level of agreement or disagreement among the panelists. Based on these criteria, the panel discussed 70 of the 136 draft indicators. For each condition, after the discussion was completed, the panelists privately rescored all 136 indicators for validity and feasibility.
Analytic Methods: Assessing Panel Ratings
Ratings from the second round of scoring were tabulated for each indicator. The final disposition of each indicator was based on its median validity and feasibility scores. To be considered endorsed by the panel and included in PRIMES, an indicator had to have a median validity rating of ≥7, a median feasibility rating of ≥4, and be scored without disagreement.
We used a statistical method to determine agreement or disagreement among the 9 panelists for each indicator.14 This method tests hypotheses about the distribution of ratings in a hypothetical population of repeated ratings by similarly selected panelists. For 9 ratings, the definition of agreement requires that no more than 2 of the ratings be outside the 3-point region that contains the median. The definition of disagreement is satisfied when ≥3 ratings are in the 1 to 3 region and ≥3 are in the 7 to 9 region. Finally, if the ratings cannot be classified as “with agreement” or “with disagreement,” then they are classified as “indeterminate” and the indicator is retained.
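Because these agreement rules are purely mechanical, they can be expressed compactly in code. The following Python sketch is illustrative only; it assumes, per the usual RAND–University of California Los Angeles convention (not stated explicitly above), that the “3-point region” containing the median is one of the fixed tertiles 1–3, 4–6, or 7–9 of the rating scale.

```python
from statistics import median

def tertile(rating: int) -> int:
    """Return the fixed 3-point region (1-3, 4-6, or 7-9) containing a rating."""
    return 0 if rating <= 3 else (1 if rating <= 6 else 2)

def classify_ratings(ratings: list[int]) -> str:
    """Classify 9 panelist ratings (1-9) as agreement, disagreement,
    or indeterminate, following the rules described above."""
    assert len(ratings) == 9 and all(1 <= r <= 9 for r in ratings)
    # Disagreement: >=3 ratings in the 1-3 region and >=3 in the 7-9 region.
    if sum(r <= 3 for r in ratings) >= 3 and sum(r >= 7 for r in ratings) >= 3:
        return "disagreement"
    # Agreement: no more than 2 ratings outside the region holding the median.
    median_region = tertile(int(median(ratings)))  # 5th of the 9 sorted values
    if sum(tertile(r) != median_region for r in ratings) <= 2:
        return "agreement"
    return "indeterminate"

# Seven ratings share the 7-9 region with the median; only 2 fall outside.
print(classify_ratings([7, 7, 8, 8, 8, 9, 9, 5, 2]))  # -> "agreement"
```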
Development of the PRIMES Electronic Data Abstraction Tool
To ensure reliable implementation of the quality indicators, we developed detailed specifications to guide data collection and analysis for each panel-endorsed indicator. Specifications were pilot tested on a sample of paper and electronic medical records from 3 children’s hospitals to determine the feasibility of data collection and scoring. Not all endorsed indicators could be successfully specified, because the data elements needed to determine eligibility for and/or adherence to the indicators were not reliably available in the pilot sample of medical records.
The specifications were then used to develop an electronic medical record abstraction tool with automated scoring capability. The tool was designed so that abstractors answer a series of questions that flow logically given the organization of the average medical record (eg, all needed vital signs are collected only once regardless of how many indicators require this information for determining eligibility or scoring). Within conditions (eg, asthma), preprogrammed algorithms use data entered by the abstractor (eg, presenting symptoms or vital signs) to determine if a given case is eligible for each indicator. Once a case is determined to be eligible, preprogrammed scoring algorithms determine whether the indicated care was received, again based on information entered by the abstractor (eg, medications administered or laboratory tests performed).
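To make the eligibility-then-scoring flow concrete, the Python sketch below encodes 1 hypothetical indicator. The field names, eligibility rule, and scoring rule are invented for illustration and are not drawn from the actual PRIMES specifications.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AbstractedCase:
    """Raw variables entered by a chart abstractor (hypothetical subset)."""
    condition: str
    age_years: float
    wheezing_documented: bool
    bronchodilator_given: bool

def eligible(case: AbstractedCase) -> bool:
    # Hypothetical eligibility algorithm: asthma cases aged 2 to <18 years
    # with documented wheezing qualify for this indicator.
    return (case.condition == "asthma"
            and 2 <= case.age_years < 18
            and case.wheezing_documented)

def score(case: AbstractedCase) -> Optional[int]:
    # Hypothetical scoring algorithm: ineligible cases are excluded (None);
    # eligible cases score 1 if the indicated care was received, else 0.
    if not eligible(case):
        return None
    return int(case.bronchodilator_given)

case = AbstractedCase("asthma", 6.0, wheezing_documented=True, bronchodilator_given=True)
print(score(case))  # 1 -> counts in both the numerator and the denominator
```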
Classification of the Indicators
Because the PRIMES tool is designed to produce both individual indicator and aggregate scores, we classified each indicator by site, function, and modality of care. These classifications can be used to create aggregate scores within and across conditions. First, the indicators were classified by site of care: ED or inpatient. Second, the indicators were classified by function of care: diagnosis, treatment, and follow-up. Diagnosis indicators address the process by which physicians make diagnoses (eg, history, physical examination). Treatment includes medications, the decision to admit or discharge a patient, and other interventions (eg, observing response to treatment). Follow-up includes developing a postdischarge care plan (eg, asthma action plan, follow-up visits). Third, the indicators were classified according to the modality by which the care is delivered: history, physical examination, laboratory or radiology study, medication, ancillary therapy (eg, chest physiotherapy), counseling, referrals, and disposition determination. Indicators were assigned to 1 site and 1 function of care, but could have up to 2 modalities (Table 1).
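In data terms, each indicator carries a small amount of classification metadata that drives the aggregate scoring. A minimal sketch of such a record follows; the example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorClassification:
    indicator_id: str
    condition: str   # asthma, bronchiolitis, croup, or CAP
    site: str        # exactly 1 site: "ED" or "inpatient"
    function: str    # exactly 1: "diagnosis", "treatment", or "follow-up"
    modalities: list[str] = field(default_factory=list)  # 1 or 2 modalities

# Hypothetical example: an inpatient follow-up indicator delivered via counseling.
example = IndicatorClassification(
    indicator_id="asthma-17",
    condition="asthma",
    site="inpatient",
    function="follow-up",
    modalities=["counseling"],
)
assert 1 <= len(example.modalities) <= 2  # up to 2 modalities per indicator
```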
Field Testing of the PRIMES Indicators
Three tertiary care children’s hospitals affiliated with academic institutions and located in large urban areas participated in the field test of the PRIMES indicators. In 1 hospital, all records were electronic; in the second hospital, some records were electronic (eg, laboratory data, orders) and some were paper (eg, histories and physicals, nursing flow charts, progress notes); and in the third hospital, inpatient records were electronic and ED records were paper. For each hospital, 2 research nurses were trained on the clinical content and use of the electronic abstraction tool by 1 of the authors. After completing training, the nurses abstracted 4 charts, 1 for each target condition. Their abstractions were compared with gold-standard abstractions developed by the same author. Abstractors were considered fully trained when they could reliably abstract the gold-standard medical record for each of the 4 target conditions. For an abstraction to be considered reliable, κ statistics for both indicator eligibility and scoring had to be 0.75 or higher.
At each participating hospital, by using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes included in hospital discharge data, we identified potentially eligible ED and inpatient discharges occurring between January 1, 2010, and December 31, 2011, and selected a random sample for each condition (see Supplemental Table 5 for a list of ICD-9-CM codes used to select cases for abstraction). Our goal was to sample 200 cases for each condition from inpatient admissions/ED visits occurring at each hospital during the study period (total sample size/hospital = 800). For asthma, bronchiolitis, and croup, the goal was to obtain 50 cases for children discharged to home from the ED and 150 cases for children hospitalized for >1 day. Because our CAP guideline and literature review for PRIMES supported only the development of inpatient quality indicators, all 200 cases for this condition were selected for children hospitalized for >1 day.
Medical Record Abstractions
At each hospital, the 2 trained nurse abstractors were assigned half of the case sample (n = 100) for each condition. Eligibility of each sampled case was confirmed at the beginning of abstraction. To be eligible, the primary discharge diagnosis had to be a PRIMES condition and the child had to fall into the specified age range for that condition: asthma ≥2 and <18 years old, bronchiolitis ≤2 years old, CAP <18 years old, croup ≤6 years old. Children with the following comorbidities were excluded: congenital lung or airway disease, neuromuscular disease, congenital heart disease, immunodeficiency syndromes, cancer, and sickle cell disease. Children also were excluded if they were initially admitted to the ICU. Data for each eligible case were entered by the abstractors into the electronic PRIMES data collection tool and both the raw variables and autogenerated indicator scores were uploaded to a central research database for further analysis.
Nurses at each site performed 10 additional abstractions for each PRIMES condition that were randomly selected from the other nurse’s sample to enable assessment of interrater reliability. Average reliability, indicated by the κ statistic, ranged from substantial to almost perfect20 on 2 levels: the child’s eligibility for care represented by a given indicator (κ = 0.96, SE = 0.01) and the child’s score for that indicator (κ = 0.86, SE = 0.01).
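For readers who want to reproduce this kind of reliability check, a minimal sketch using Cohen’s κ (via scikit-learn) is shown below. The paired determinations are invented, and the study’s exact pooling of κ across indicators and conditions is not detailed here.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired determinations (1 = eligible, 0 = not eligible) made
# by the 2 nurse abstractors on the same 10 overlap charts for 1 indicator.
abstractor_1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
abstractor_2 = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]

kappa = cohen_kappa_score(abstractor_1, abstractor_2)
print(f"kappa = {kappa:.2f}")  # ~0.78, substantial agreement on this scale
```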
Analytic Methods: PRIMES Scores
Hospital-level summary scores were constructed by using the following approach: the denominator for each score was the total number of children eligible to receive indicated care, and the numerator was the number of times that care was received. Several PRIMES summary scores were generated at the hospital level, including condition-specific scores, site of care scores, scores by function of care, and scores by modality of care. The statistical significance of observed variation in scores between the 3 hospitals, as well as within each hospital by condition, site of care, function, and modality, was assessed by comparing mean scores by using the Student’s t-test or analysis of variance, as appropriate.
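This score construction reduces to a grouped proportion. The pandas sketch below illustrates the calculation on invented case-level data; in the study, the abstraction tool produced these values automatically.

```python
import pandas as pd

# One row per eligible (case, indicator) pair; passed = 1 if the indicated
# care was received. All values are invented for illustration.
data = pd.DataFrame({
    "hospital":  ["A", "A", "A", "B", "B", "B"],
    "condition": ["asthma", "asthma", "croup", "asthma", "croup", "croup"],
    "site":      ["ED", "inpatient", "ED", "ED", "ED", "inpatient"],
    "passed":    [1, 0, 1, 1, 1, 0],
})

# Condition-specific hospital score on the 0-100 scale:
# (times indicated care received) / (number of eligible cases) x 100.
condition_scores = data.groupby(["hospital", "condition"])["passed"].mean() * 100
# Site-of-care scores follow the same pattern.
site_scores = data.groupby(["hospital", "site"])["passed"].mean() * 100
print(condition_scores, site_scores, sep="\n")
```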
Because some indicators are more challenging to pass than others, when making hospital-to-hospital comparisons we adjusted hospital-level PRIMES scores to account for the level of difficulty associated with passing each indicator, in lieu of making adjustments for patient case-mix, demographics, or comorbidities.21,22 This “observed difficulty of delivery” (ODD) adjustment is performed for each indicator by subtracting the mean study population pass rate for that indicator from a hospital’s average pass rate for that indicator. The adjusted hospital-level PRIMES condition score (eg, hospital A’s asthma score) is calculated by averaging all of the ODD-adjusted indicator scores for that condition and then adding this average to the mean study population pass rate for that condition. ODD adjustment can result in a score >100 when a hospital achieves a high score on a difficult-to-pass measure. For example, consider a condition that has 2 quality indicators, the first with an average study population pass rate across all 3 hospitals of 80% and the second with an average population pass rate of 20%. The average population pass rate for that condition across all 3 hospitals is 75%, because eligibility for the first indicator is more common than eligibility for the second. A hospital with a pass rate of 80% on the first indicator and a pass rate of 90% on the second exceeds the expected score for the second indicator by 70 points. Also assume that this hospital has a higher eligibility rate for the second indicator. The hospital’s ODD-adjusted score for that condition is then the average of its ODD-adjusted indicator scores (80 − 80 = 0 and 90 − 20 = 70), weighted by its eligible cases for each indicator, plus the population pass rate for the condition; for instance, with equal numbers of eligible cases for the 2 indicators, the score would be ([0 + 70] / 2) + 75 = 110.
This hospital receives a very high score by excelling at a difficult-to-pass measure. It is also possible to receive a score <0 with a low pass rate on an easy-to-pass measure.
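A compact sketch of the ODD adjustment on invented pass rates and eligibility counts follows. One detail the text leaves open is exactly how the study population means are computed; this sketch weights by eligible cases, consistent with the eligibility-weighted condition pass rate in the worked example above.

```python
import pandas as pd

# One row per (hospital, indicator): the hospital's pass rate (%) on that
# indicator and its number of eligible cases. All values are invented.
df = pd.DataFrame({
    "hospital":   ["A", "A", "B", "B", "C", "C"],
    "indicator":  [1, 2, 1, 2, 1, 2],
    "pass_rate":  [80.0, 90.0, 85.0, 10.0, 75.0, 10.0],
    "n_eligible": [100, 20, 300, 100, 300, 100],
})

def wmean(group: pd.DataFrame, col: str) -> float:
    """Eligibility-weighted mean of a column within a group."""
    return (group[col] * group["n_eligible"]).sum() / group["n_eligible"].sum()

# Difficulty of each indicator: study population pass rate across hospitals.
pop_by_indicator = df.groupby("indicator").apply(wmean, "pass_rate")

# ODD adjustment: hospital pass rate minus the population rate per indicator.
df["odd_adjusted"] = df["pass_rate"] - df["indicator"].map(pop_by_indicator)

# Condition-level study population pass rate (eligibility weighted).
pop_condition = wmean(df, "pass_rate")

# Adjusted hospital condition score: weighted mean of the hospital's
# ODD-adjusted indicator scores plus the condition-level population rate.
scores = df.groupby("hospital").apply(wmean, "odd_adjusted") + pop_condition
print(scores.round(1))  # by design, scores can exceed 100 or fall below 0
```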
All study procedures were approved by the institutional review boards of all 3 participating hospitals.
Delphi Panel Results
Overall, 112 (82%) of the 136 draft quality indicators were endorsed by the Delphi panel. The proportion of indicators endorsed varied by condition, ranging from 69% for CAP to 95% for croup. These results are similar to previous quality indicator development efforts using the RAND–University of California Los Angeles Delphi method (81%–95%).23–26 Seventy-six (68%) of the panel-endorsed indicators were successfully specified and field tested (Table 2). Forty-six (60%) of the PRIMES indicators assess care received in the ED setting and 30 (40%) assess inpatient care.
PRIMES Field Test Results
Across the 3 hospitals, 2796 charts were abstracted for the 4 PRIMES conditions with 190 to 350 cases abstracted per condition at each hospital. The goal of 200 cases per condition per hospital was exceeded for asthma (n = 219–350), bronchiolitis (n = 220–301), and pneumonia (n = 202–216); however, the 2-year study period was insufficient to reach this goal for croup in all 3 hospitals (n = 190–249).
Overall PRIMES summary scores were highest (means [SDs] 94.0 [3.5]–95.8 [3.0]) for croup and lowest for CAP (means [SDs] 68.2 [1.0]–88.0 [1.0]; Table 3). Significant between-hospital variations in overall scores were observed for CAP and croup but not for bronchiolitis or asthma (Table 3).
We found significant (P < .001 for all comparisons) within-hospital variation, with ED scores (means [SDs] 82.2 [6.1]–100.0 [14.4]) exceeding inpatient scores (means [SDs] 71.1 [2.0]–90.8 [1.3]) for asthma and bronchiolitis, and inpatient scores (means [SDs] 139 [51.0]–170 [38.0]) exceeding ED scores (means [SDs] 94.8 [3.7]–96.6 [4.2]) for croup, in all 3 hospitals (Table 3).
Quality scores related to function and modality of care varied significantly both within and between the 3 hospitals (Table 4). Scores related to treatment decisions and appropriate use of medications were among the highest (means [SDs] 88.1 [17.9]–94.1 [14.0]), whereas those related to laboratory/radiology testing were among the lowest (means [SDs] 56.7 [43.4]–73.3 [40.6]).
Although several pediatric inpatient studies have examined either variations in care or adherence to quality standards for a variety of respiratory diagnoses by using administrative claims data,6,8,9 few have conducted detailed quality assessments requiring medical record review. The studies that have used medical record review included limited numbers of quality indicators focused on a single condition (asthma).11,12,27 Development of PRIMES represents a first step in facilitating more thorough and clinically relevant assessments of health care quality provided to children for 1 of 4 respiratory conditions frequently encountered in the pediatric ED and inpatient settings.
We observed variation in performance both between and within hospitals on the PRIMES indicators at the condition level. Between-hospital variation was most marked for CAP. Although the CAP quality indicators are aligned with many of the recommendations in the pediatric CAP clinical practice guideline released in 2011,28 the wider variation in performance observed may be related to the fact that the guideline was released late in our study period (January 1, 2010–December 31, 2011). Expert consensus guidelines for asthma, bronchiolitis, and croup were in place for several years (2005–2007) before the study period and likely had some degree of uptake in clinical practice potentially resulting in decreased variation.29–31 Although we found statistically significant differences in inpatient scores for asthma as well as overall and ED scores for croup across the 3 hospitals, the clinical significance of these differences is questionable given their small magnitude (2 points on the 0–100 scale; Table 3). Inpatient scores for croup also varied significantly across institutions; however, these estimates were based on a small number of cases (37–75 cases per hospital; Table 3) and may not be representative of care in general for this condition at these institutions.
We found significant within-hospital variation in performance between the ED and inpatient setting for asthma, bronchiolitis, and croup in all 3 field test hospitals (Table 3). Better performance in the ED for asthma and bronchiolitis care may reflect a more standardized clinical approach to children presenting to this setting with acute exacerbation of asthma or respiratory distress later diagnosed as bronchiolitis. In most tertiary care pediatric academic medical centers, larger numbers of different providers care for children in the inpatient setting compared with the ED, which may lead to more variation in approaches to care and lower adherence to clinical pathways. For croup, only 2 of 15 quality indicators applied to inpatient care (Supplemental Table 6); thus, obtaining higher average scores required adherence to far fewer processes of care, so it is not surprising that inpatient scores were significantly higher for this condition.
When considering functions and modalities of care (Table 4), performance varied significantly both between and within the 3 field test hospitals. Not surprisingly, and consistent with previous studies examining function and modality, quality scores related to treatment decisions and appropriate use of medications were some of the highest, whereas those related to appropriate laboratory/radiology testing and counseling were low.10,32
This quality measure development process had several limitations. First, expert consensus was the dominant level of evidence supporting PRIMES indicators (Supplemental Table 6); however, this reflects the state of the evidence base for these 4 conditions. In most cases, current pediatric practice guidelines include many recommendations that are based on expert consensus rather than empirical evidence.4,33,34 Because randomized trials may not be feasible to conduct in children, much of quality assessment in pediatrics will continue to focus on care processes that are hypothesized to be associated with better outcomes. However, where possible, outcome validation of process measures should be undertaken and should focus on shorter-term outcomes that are primarily dependent on the medical interventions being delivered. Future validation work for PRIMES should include formal studies to assess the relationship between high levels of performance on these indicators and other established quality measures, such as return to baseline functional status during the month after hospitalization, reduction in return visits to the ED, or fewer 30-day readmissions to the hospital. Until further outcome validation of the PRIMES indicators occurs, hospitals may want to focus their quality improvement (QI) efforts on the indicators with higher levels of supporting evidence (level 1 or 2; Supplemental Table 6).
Second, our field test was limited to 3 tertiary care children’s hospitals, so performance on these quality indicators may be different at other children’s hospitals with lower acuity of care or in community hospitals in which most children receive care for these conditions.2 Further testing of PRIMES in these settings is warranted.
Third, our understanding of the key drivers of observed variation in performance on the PRIMES measures is limited due to a lack of contextual information related to the care settings in which they were assessed.35 Future studies of the PRIMES quality measures should include contextual information to better understand variation in performance observed in different care settings (ED versus inpatient) and/or across different institutions.
Fourth, the age of the literature review (1999–2009) is a potential limitation. However, review of more recently published guidelines and evidence reviews for these conditions demonstrates that the PRIMES indicators are still well aligned with current recommendations, with 2 notable exceptions.28,36–38 Review of the 2011 pediatric CAP guideline developed by the Infectious Diseases Society of America indicates that testing for acute phase reactants in children hospitalized for CAP may be appropriate under certain circumstances.28 Thus, the PRIMES indicator stating these tests should not be performed (Supplemental Table 6; CAP indicator no. 4) is too stringent and will be removed from the post–field test tool. Review of the 2014 AAP bronchiolitis guideline indicates that β-agonists should not be used to treat children diagnosed with bronchiolitis under any circumstances.36 This is in contrast to the 2006 AAP bronchiolitis guideline statement that suggested treatment with β-agonists was acceptable if there was also an assessment before and after administration that indicated clinical improvement.30 Thus, the bronchiolitis PRIMES indicator regarding treatment with β-agonists will be updated to reflect this more stringent recommendation in the final tool (Supplemental Table 6; Bronchiolitis indicator no. 9). As with all quality measures, periodically reviewing new evidence as it becomes available will be essential to determine whether indicators included in PRIMES require adjustment or deletion.
Based on this limited pilot test, we recommend PRIMES for monitoring within-hospital QI efforts on the individual condition-specific measures included in the tool. The individual quality indicators provide performance information on processes of care that are under the control of health care providers and lend themselves well to QI efforts. Further testing of PRIMES in a more representative sample of hospitals is needed to establish the tool’s utility for accountability assessments across hospitals.
Despite the limitations inherent in this measure development and testing process, PRIMES represents a newly developed set of quality indicators that are feasible to implement and demonstrate significant variation in performance both between and within 3 children’s hospitals for various aspects of respiratory care. PRIMES may be a useful tool for monitoring and improving the quality of inpatient and ED care for common pediatric respiratory illnesses.
We acknowledge the contributions of expertise and time made by the Expert Delphi Panel members who participated in the development of PRIMES: Leonard Bacharier, MD, Thomas Boat, MD, Kathryn M. Edwards, MD, John Meurer, MD, MBA, Wayne Morgan, MD, Ricardo Quinonez, MD, Daniel Rauch, MD, Michael S. Schechter, MD, MPH, and Erin Stucky, MD. We also acknowledge the contributions of Cindy Larison, MPH, former data analyst at Seattle Children’s Research Institute, who conducted the data analysis for this study under the direction of author Dr John Adams, Senior Statistician at Kaiser Permanente Center for Effectiveness and Safety Research, Pasadena, CA.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: All phases of this study were funded by National Heart, Lung, and Blood Institute grant 1R01HL088503–01A2, principal investigator: Rita Mangione-Smith, MD, MPH.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
References

- Keren R, Shah SS.
- Leyenaar JK, Ralston SL, Shieh MS, Pekow PS, Mangione-Smith R, Lindenauer PK.
- Centers for Disease Control and Prevention. Data, statistics, and surveillance: most recent asthma data (updated March 2016). Available at: https://www.cdc.gov/asthma/most_recent_data.htm. Accessed December 20, 2016.
- Melnyk BM, Grossman DC, Chou R, et al.
- Okelo SO, Butz AM, Sharma R, et al.
- Ross RK, Hersh AL, Kronman MP, et al.
- Nkoy FL, Fassl BA, Simon TD, et al.
- Children’s Hospital Association. PHIS (Pediatric Health Information System) 2016. Available at: https://www.childrenshospitals.org/programs-and-services/data-analytics-and-research/pediatric-analytic-solutions/pediatric-health-information-system. Accessed January 30, 2017.
- Brook RH.
- Schuster MA, Asch SM, McGlynn EA, Kerr EA, Hardy AM, Gifford DS.
- Wang CJ, Jonas R, Fu CM, Ng CY, Douglass L.
- Fassl BA, Nkoy FL, Stone BL, et al.
- Bradley JS, Byington CL, Shah SS, et al.
- Johnson D, Klassen TP, Kellner JD.
- American Academy of Pediatrics Subcommittee on Diagnosis and Management of Bronchiolitis. Diagnosis and management of bronchiolitis. Pediatrics. 2006;118(4):1774–1793.
- NAEPP. Guidelines for the diagnosis and management of asthma: expert panel report 3. Bethesda, MD: Department of Health and Human Services, Public Health Service; 2007. NIH Publication No. 08-5846.
- May CR, Johnson M, Finch T.
- Ralston SL, Lieberthal AS, Meissner HC, et al.
- Bishop J, Enriquez B, Allard A, et al.
- Normansell R, Kew KM, Mansour G.

Copyright © 2017 by the American Academy of Pediatrics