The Precautionary Principle and statistical approaches to uncertainty

Niels Keiding, Esben Budtz-Jørgensen

Abstract

The central challenge that the Precautionary Principle poses to statistical methodology is to help delineate, preferably quantitatively, the possibility that some exposure is hazardous, even in cases where this is not established beyond reasonable doubt. The classical approach to hypothesis testing is unhelpful, because lack of significance can be due either to uninformative data or to a genuine lack of effect (the Type II error problem). Its inversion, bioequivalence testing, might sometimes serve as a model for the Precautionary Principle through its ability to "prove the null hypothesis". Current procedures for setting safe exposure levels are essentially derived from these classical statistical ideas, and we outline how uncertainties in the exposure and response measurements affect the no-observed-adverse-effect level, the Benchmark approach and the "Hockey Stick" model. A particular problem concerns model uncertainty: these procedures usually assume that the class of models describing dose-response is known with certainty. This assumption is, however, often violated, perhaps particularly often when epidemiological data form the source of the risk assessment, and regulatory authorities have occasionally resorted to some average based on competing models. The recent methodology of Bayesian model averaging might offer a systematic version of this, but is this an arena for the Precautionary Principle to come into play?
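
As a rough illustration of the model-averaging idea raised at the end of the abstract, the sketch below fits two hypothetical dose-response models (a straight line and a "hockey stick" with an unknown threshold) to synthetic data and combines their predictions with approximate BIC-based posterior model weights. The data, candidate models, and helper names are illustrative assumptions only, not the authors' analysis or any regulatory procedure.

```python
# Minimal sketch of BIC-based model averaging over competing dose-response
# models; all data and models here are synthetic and purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical exposure (dose) and response measurements.
dose = np.linspace(0.0, 10.0, 40)
resp = 0.2 * np.clip(dose - 3.0, 0.0, None) + rng.normal(0.0, 0.15, dose.size)

# Candidate model 1: linear dose-response.
def linear(d, a, b):
    return a + b * d

# Candidate model 2: "hockey stick" with an unknown threshold tau.
def hockey_stick(d, a, b, tau):
    return a + b * np.clip(d - tau, 0.0, None)

def fit_and_bic(model, p0):
    """Least-squares fit plus Gaussian-error BIC (up to an additive constant)."""
    params, _ = curve_fit(model, dose, resp, p0=p0)
    rss = np.sum((resp - model(dose, *params)) ** 2)
    n, k = dose.size, len(params)
    bic = n * np.log(rss / n) + k * np.log(n)
    return params, bic

p_lin, bic_lin = fit_and_bic(linear, [0.0, 0.1])
p_hs, bic_hs = fit_and_bic(hockey_stick, [0.0, 0.1, 2.0])

# Approximate posterior model probabilities from BIC differences.
bics = np.array([bic_lin, bic_hs])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Model-averaged predicted response at a dose of interest.
d0 = 5.0
pred = w[0] * linear(d0, *p_lin) + w[1] * hockey_stick(d0, *p_hs)
print(f"weights (linear, hockey stick): {w.round(3)}; averaged response at dose {d0}: {pred:.3f}")
```
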
Original language: English
Journal: International Journal of Occupational Medicine and Environmental Health
Volume: 17
Issue number: 1
Pages (from-to): 147-51
Number of pages: 4
ISSN: 1232-1087
Publication status: Published - 2004
