Abstract
The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples. In this work we study a simple self-learning approach to semi-supervised learning applied to the least squares classifier. We show that a soft-label and a hard-label variant of self-learning can be derived by applying block coordinate descent to two related but slightly different objective functions. The resulting soft-label approach is related to an idea about dealing with missing data that dates back to the 1930s. We show that the soft-label variant typically outperforms the hard-label variant on benchmark datasets and partially explain this behaviour by studying the relative difficulty of finding good local minima for the corresponding objective functions.
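The alternating scheme described in the abstract can be illustrated with a short sketch: fit a least squares classifier on the labeled data, impute labels for the unlabeled data (hard sign labels vs. soft clipped responses), and refit on the union, repeating until convergence. This is a minimal toy illustration under a binary ±1 label encoding, not the paper's exact objective functions; the function names (`fit_ls`, `self_learn`) and the clipping used for the soft-label variant are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ls(X, y):
    # Least squares classifier with a bias term: minimize ||[X, 1] w - y||^2.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict_raw(w, X):
    # Raw (continuous) decision values; sign gives the class in {-1, +1}.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

def self_learn(Xl, yl, Xu, hard=True, iters=20):
    # Block coordinate descent sketch: alternate between (1) imputing
    # labels for the unlabeled data Xu and (2) refitting w on all data.
    w = fit_ls(Xl, yl)
    for _ in range(iters):
        scores = predict_raw(w, Xu)
        if hard:
            yu = np.sign(scores)           # hard-label variant
        else:
            yu = np.clip(scores, -1.0, 1.0)  # soft-label variant (assumed form)
        w = fit_ls(np.vstack([Xl, Xu]), np.concatenate([yl, yu]))
    return w

# Toy data: two Gaussian blobs, few labeled points, many unlabeled.
Xl = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
yl = np.array([-1.0] * 5 + [1.0] * 5)
Xu = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])

w_hard = self_learn(Xl, yl, Xu, hard=True)
w_soft = self_learn(Xl, yl, Xu, hard=False)
```

Both variants return a weight vector for the augmented feature space; the difference lies only in how the unlabeled block of the objective is minimized at each step.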
Original language | English |
---|---|
Title | 23rd International Conference on Pattern Recognition, ICPR 2016 |
Number of pages | 6 |
Publisher | IEEE |
Publication date | 1 Jan. 2016 |
Pages | 1677-1682 |
ISBN (electronic) | 978-1-5090-4847-2 |
DOI | |
Status | Published - 1 Jan. 2016 |
Event | 23rd International Conference on Pattern Recognition - Cancun, Mexico. Duration: 4 Dec. 2016 → 8 Dec. 2016. Conference number: 23 |
Conference

Conference | 23rd International Conference on Pattern Recognition |
---|---|
No. | 23 |
Country/Territory | Mexico |
City | Cancun |
Period | 04/12/2016 → 08/12/2016 |