Permutation tests for classification: Revisited

Melanie Ganz, Ender Konukoglu

Abstract

In recent years, the focus on validating the statistical methods used in the field of neuroimaging has increased. While several papers have already highlighted the importance of non-parametric methods, and especially of permutation testing for general linear models (GLMs), the statistical validation of classification results beyond plain cross-validation has received far less attention. Yet classification, especially binary classification, is one of the most common tools in neuroimaging. Permutation tests are often omitted with the argument that they are too computationally expensive, particularly for training-intensive classifiers such as neural networks. Here we revisit the use of permutation tests for statistically validating cross-validation results and employ recent approximate permutation methods that reduce the number of permutations that need to be performed. We evaluate the feasibility of using full as well as approximate permutation methods in the extreme cases of small and unbalanced data sets. Our results indicate that tail and Gamma approximations are applicable for permutation testing in binary classification tasks.
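To make the idea concrete, below is a minimal sketch (not the authors' code) of a label-permutation test for a cross-validated binary classifier, together with a Gamma approximation of the null distribution fitted to a reduced number of permutations. The classifier, data set, permutation count, and the particular Gamma fit are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a permutation test for cross-validated classification accuracy,
# with a Gamma-approximated p-value estimated from few permutations.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=60, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Observed cross-validated accuracy on the true labels.
obs = cross_val_score(clf, X, y, cv=5).mean()

# Null distribution: re-run cross-validation under randomly permuted labels.
n_perm = 200  # far fewer than an exhaustive permutation test
null = np.array([
    cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
    for _ in range(n_perm)
])

# Standard empirical permutation p-value.
p_perm = (1 + np.sum(null >= obs)) / (n_perm + 1)

# Approximate p-value from a Gamma distribution fitted to the null samples;
# a parametric tail fit can extrapolate small p-values from few permutations.
shape, loc, scale = stats.gamma.fit(null)
p_gamma = stats.gamma.sf(obs, shape, loc=loc, scale=scale)

print(f"observed accuracy = {obs:.3f}")
print(f"empirical p = {p_perm:.4f}, Gamma-approximated p = {p_gamma:.4f}")
```

The design point of the approximation is that the parametric fit lets one estimate tail probabilities well below 1/n_perm, which a purely empirical permutation p-value cannot resolve.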

Original language: English
Title of host publication: 2017 International Workshop on Pattern Recognition in Neuroimaging, PRNI 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Publication date: 14 Jul 2017
ISBN (Print): 9781538631591
DOIs
Publication status: Published - 14 Jul 2017
Series: 2017 International Workshop on Pattern Recognition in Neuroimaging, PRNI 2017
