Calibration plots for risk prediction models in the presence of competing risks

Thomas A Gerds, Per K Andersen, Michael W Kattan


Abstract

A predicted risk of 17% can be called reliable if the event can be expected to occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event, such as cardiovascular death, in the presence of competing risks, such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. Three practical problems are often encountered when the aim is to display or test whether a risk prediction model is well calibrated: the first is the lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with all three problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values combined with a nearest neighborhood smoother and a cross-validation approach.
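The authors work in R; purely as an illustration of the pseudo-value idea described in the abstract, the following is a minimal Python sketch. It computes the Aalen-Johansen cumulative incidence at a time horizon, derives jackknife pseudo-values from it, and smooths them against predicted risks with a crude nearest-neighbour average. The function names, the fixed neighbourhood fraction, and the synthetic data are illustrative assumptions, and the cross-validation step needed when predictions come from the same data as the calibration assessment is omitted.

```python
import numpy as np

def aalen_johansen_cif(time, status, horizon, cause=1):
    """Aalen-Johansen estimate of the cumulative incidence of `cause` at `horizon`.
    `time` and `status` are numpy arrays; `status` is 0 for censored, else the cause code."""
    event_times = np.unique(time[(status != 0) & (time <= horizon)])
    surv, cif = 1.0, 0.0  # all-cause survival (left limit) and cumulative incidence
    for t in event_times:
        at_risk = np.sum(time >= t)
        d_cause = np.sum((time == t) & (status == cause))
        d_any = np.sum((time == t) & (status != 0))
        cif += surv * d_cause / at_risk
        surv *= 1.0 - d_any / at_risk
    return cif

def pseudo_values(time, status, horizon, cause=1):
    """Jackknife pseudo-values: theta_i = n*theta_full - (n-1)*theta_leave_one_out.
    Individual pseudo-values may fall outside [0, 1]; their average over a
    neighbourhood estimates the observed risk."""
    n = len(time)
    full = aalen_johansen_cif(time, status, horizon, cause)
    keep = np.ones(n, dtype=bool)
    pv = np.empty(n)
    for i in range(n):
        keep[i] = False
        loo = aalen_johansen_cif(time[keep], status[keep], horizon, cause)
        pv[i] = n * full - (n - 1) * loo
        keep[i] = True
    return pv

def calibration_curve(pred_risk, pv, frac=0.3):
    """Crude nearest-neighbour smoother: for each predicted risk, average the
    pseudo-values of the frac*n subjects with the closest predictions."""
    n = len(pred_risk)
    k = max(1, int(frac * n))
    observed = np.array(
        [pv[np.argsort(np.abs(pred_risk - p))[:k]].mean() for p in pred_risk]
    )
    order = np.argsort(pred_risk)
    return pred_risk[order], observed[order]

# Illustrative use with synthetic data (0 = censored, 1 = event of interest, 2 = competing event).
rng = np.random.default_rng(1)
n = 300
time = rng.exponential(5.0, n)
status = rng.choice([0, 1, 2], n, p=[0.2, 0.5, 0.3])
pred = rng.uniform(0, 1, n)  # stand-in for predictions from some risk model
pv = pseudo_values(time, status, horizon=3.0)
x, y = calibration_curve(pred, pv)
# A well calibrated model would give points (x, y) close to the diagonal y = x.
```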

Original language: English
Journal: Statistics in Medicine
Volume: 33
Issue number: 18
Pages (from-to): 3191–3203
Number of pages: 13
ISSN: 0277-6715
DOI
Status: Published - 15 Aug 2014
