Community-Wide Assessment of Intensive Care Outcomes Using a Physiologically Based Prognostic Measure: Analysis

8 Aug 2014

In addition to predictions based on the national normative sample, the predicted risk of in-hospital death was reestimated based on the current data set. This entailed the development of a second multivariable equation using logistic regression that included the APS score, admission source, and ICU admission diagnosis. Admission source and diagnosis were expressed as n-1 indicator variables, where n is the number of categories for each element. Because the risk of death did not increase linearly with increasing APS, we entered APS into the logistic regression model as both a continuous variable and a series of 15 indicator variables for specific ranges of scores (eg, < 20, 20 to 29, 30 to 39). Adding the indicator variables better fit the curvilinear relationship between APS and in-hospital death and, as described below, improved model calibration compared with a model containing only the continuous variable.
To examine the potential impact of interhospital variation in discharge triage practices, a further set of risk predictions was developed from the local sample after excluding 14,192 patients who were discharged to skilled nursing facilities, physical rehabilitation centers, and nursing homes. These predictions were also derived from a logistic regression model that included the same independent variables as the prior model.
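Continuing the sketch above, this sensitivity refit could restrict the sample before re-estimating the same specification; the discharge_destination column and its codes are placeholders for however post-acute destinations are recorded in the data set.

```python
import statsmodels.formula.api as smf

# Drop patients discharged to post-acute facilities (placeholder codes), then refit
# the same model on the restricted sample; df and aps_band come from the sketch above.
post_acute = {"skilled_nursing_facility", "rehabilitation_center", "nursing_home"}
df_acute = df[~df["discharge_destination"].isin(post_acute)].copy()

sensitivity_model = smf.logit(
    "hospital_death ~ aps + C(aps_band) + C(admit_source) + C(icu_diagnosis)",
    data=df_acute,
).fit()
df_acute["pred_risk"] = sensitivity_model.predict(df_acute)
```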

Analysis
Discrimination of the risk predictions from the national and local samples was assessed by receiver operating characteristic (ROC) curve analysis, which measures the proportion of times the predicted risk of death was higher in patients who died than in patients who were discharged alive. Calibration (ie, goodness of fit) of the two sets of risk predictions was assessed by the Hosmer-Lemeshow statistic, which compares observed and predicted numbers of deaths across deciles of increasing risk.
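Both checks can be computed roughly as below, assuming the outcomes and predicted risks from the local model sketched earlier; the ten-group split and the g - 2 degrees of freedom follow the usual Hosmer-Lemeshow convention rather than anything stated explicitly in the text.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from scipy.stats import chi2

y = df["hospital_death"].to_numpy()
p = df["pred_risk"].to_numpy()

# Discrimination: the ROC area equals the probability that a patient who died was
# assigned a higher predicted risk than a patient discharged alive.
auc = roc_auc_score(y, p)

# Calibration: Hosmer-Lemeshow statistic over deciles of predicted risk, comparing
# observed deaths with the sum of predicted risks (expected deaths) in each decile.
deciles = pd.qcut(p, 10, labels=False, duplicates="drop")
hl_stat = 0.0
for g in np.unique(deciles):
    in_g = deciles == g
    obs = y[in_g].sum()
    exp = p[in_g].sum()
    n_g = in_g.sum()
    hl_stat += (obs - exp) ** 2 / (exp * (1 - exp / n_g))

# With g risk groups, the statistic is conventionally referred to chi-square on g - 2 df.
hl_p = chi2.sf(hl_stat, df=len(np.unique(deciles)) - 2)
```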
Performance of individual hospitals was summarized using a standardized mortality ratio (SMR). The SMR was the observed hospital mortality rate divided by the mean predicted mortality rate, as determined by aggregating patient-level risk predictions from the locally derived multivariable equations. An SMR > 1.0 indicates an observed death rate that is higher than expected (ie, lower performance), whereas an SMR < 1.0 indicates an observed death rate that is lower than expected (ie, higher performance). Confidence intervals around hospital SMRs were estimated by calculating exact 99% limits around the observed hospital mortality rate and dividing these limits by the mean predicted mortality rate, which was treated as a constant. Because of the large sample sizes, we chose a conservative cutoff of p < 0.01 to determine which hospital SMRs differed statistically from 1.0.
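The per-hospital calculation described here might look like the following sketch; the function and argument names are hypothetical, the worked numbers are invented for illustration, and the exact 99% limits are taken to be Clopper-Pearson binomial limits, one standard construction of exact limits on an observed rate.

```python
from scipy.stats import beta

def smr_with_ci(deaths, n_patients, mean_pred_risk, alpha=0.01):
    # SMR for one hospital with exact (Clopper-Pearson) 100*(1 - alpha)% limits on the
    # observed mortality rate, each divided by the mean predicted rate treated as constant.
    observed_rate = deaths / n_patients
    lower = beta.ppf(alpha / 2, deaths, n_patients - deaths + 1) if deaths > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, deaths + 1, n_patients - deaths) if deaths < n_patients else 1.0
    return (observed_rate / mean_pred_risk,
            lower / mean_pred_risk,
            upper / mean_pred_risk)

# Hypothetical hospital: 120 deaths among 1,000 admissions with mean predicted risk 0.10
# gives SMR = 1.2; it differs from 1.0 at p < 0.01 only if the 99% interval excludes 1.0.
smr, lo, hi = smr_with_ci(120, 1000, 0.10)
```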
