Calibration plots for risk prediction models in the presence of competing risks

Thomas A Gerds, Per K Andersen, Michael W Kattan


Abstract

A predicted risk of 17% can be called reliable if the event can be expected to occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. Three practical problems are often encountered when the aim is to display or test whether a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To address all three problems, we propose estimating calibration curves for competing risks models from jackknife pseudo-values combined with a nearest neighborhood smoother and a cross-validation approach.
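The core idea can be illustrated with a minimal sketch. The code below (hypothetical function names; a simplification assuming uncensored data, whereas the paper handles right censoring by building pseudo-values from the Aalen–Johansen estimator and adds cross-validation) computes jackknife pseudo-values for the cause-specific cumulative incidence at a time point and smooths them over the predicted risks with a nearest-neighborhood window to obtain a calibration curve:

```python
import numpy as np

def pseudo_values(event_time, event_type, t, cause=1):
    """Jackknife pseudo-values for the cumulative incidence of `cause`
    at time t: n * theta_hat - (n - 1) * theta_hat_leave_one_out.
    Uncensored data only, so theta_hat is a simple proportion."""
    n = len(event_time)
    full = np.mean((event_time <= t) & (event_type == cause))
    loo = np.array([
        np.mean((np.delete(event_time, i) <= t)
                & (np.delete(event_type, i) == cause))
        for i in range(n)
    ])
    return n * full - (n - 1) * loo

def calibration_curve(pred_risk, pv, k=50):
    """Nearest-neighborhood smoother: for each subject, average the
    pseudo-values of the roughly k subjects with the closest
    predicted risks (a symmetric window in the sorted risks)."""
    order = np.argsort(pred_risk)
    p, v = pred_risk[order], pv[order]
    obs = np.array([v[max(0, i - k // 2): i + k // 2 + 1].mean()
                    for i in range(len(p))])
    return p, obs  # plot obs against p; the diagonal means perfect calibration

# tiny simulated example (hypothetical data, not from the paper)
rng = np.random.default_rng(1)
n = 200
time = rng.exponential(1.0, n)
cause = rng.integers(1, 3, n)       # event of interest (1) or competing cause (2)
pred = rng.uniform(0, 1, n)         # placeholder predicted risks at t = 1
pv = pseudo_values(time, cause, t=1.0)
p, obs = calibration_curve(pred, pv)
```

Without censoring, the pseudo-value for subject i reduces exactly to the event indicator 1(T_i <= t, cause_i = 1); the pseudo-value construction matters precisely because, under censoring, individual indicators are unobservable while the jackknife values remain well defined.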

Original language: English
Journal: Statistics in Medicine
Volume: 33
Issue number: 18
Pages (from-to): 3191–3203
Number of pages: 13
ISSN: 0277-6715
Publication status: Published - 15 Aug 2014

