Practical hash functions for similarity estimation and dimensionality reduction

Søren Dahlgaard, Mathias Bæk Tejs Knudsen, Mikkel Thorup

Abstract

Hashing is a basic tool for dimensionality reduction employed in several aspects of machine learning. However, the performance analysis is often carried out under the abstract assumption that a truly random unit-cost hash function is used, without concern for which concrete hash function is employed. The concrete hash function may work fine on sufficiently random input. The question is whether it can be trusted in the real world, where it may be faced with more structured input. In this paper we focus on two prominent applications of hashing, namely similarity estimation with the one permutation hashing (OPH) scheme of Li et al. [NIPS'12] and feature hashing (FH) of Weinberger et al. [ICML'09], both of which have found numerous applications, e.g. in approximate near-neighbour search with LSH and large-scale classification with SVM. We consider the recent mixed tabulation hash function of Dahlgaard et al. [FOCS'15], which was proved theoretically to perform like a truly random hash function in many applications, including the above OPH. Here we first show improved concentration bounds for FH with truly random hashing and then argue that mixed tabulation performs similarly when the input vectors are not too dense. Our main contribution, however, is an experimental comparison of different hashing schemes when used inside FH, OPH, and LSH. We find that mixed tabulation hashing is almost as fast as the classic multiply-mod-prime scheme (ax + b) mod p. Multiply-mod-prime is guaranteed to work well on sufficiently random data, but here we demonstrate that in the above applications it can lead to bias and poor concentration on both real-world and synthetic data. We also compare with the very popular MurmurHash3, which has no proven guarantees. Mixed tabulation and MurmurHash3 both perform similarly to truly random hashing in our experiments. However, mixed tabulation was 40% faster than MurmurHash3, and it has the proven guarantee of good performance (like fully random) on all possible input, making it more reliable.
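For readers unfamiliar with the schemes named above, the following is a minimal illustrative sketch in Python (not the authors' code) of the classic multiply-mod-prime hash h(x) = ((ax + b) mod p) mod m, and of feature hashing built on top of such a hash together with a sign hash. All parameter names, the choice of prime, and the toy input are assumptions chosen purely for illustration.

import random

# Mersenne prime, a common modulus choice for multiply-mod-prime hashing (assumption).
P = (1 << 61) - 1

def make_mult_mod_prime(m, seed=0):
    """Return a hash function x -> ((a*x + b) mod P) mod m with random a, b."""
    rng = random.Random(seed)
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

def feature_hash(features, d, index_hash, sign_hash):
    """Feature hashing: map a sparse vector {feature_id: value} to a dense vector of length d."""
    v = [0.0] * d
    for f, value in features.items():
        j = index_hash(f)                       # bucket in [0, d)
        s = 1.0 if sign_hash(f) == 0 else -1.0  # random sign keeps inner products unbiased in expectation
        v[j] += s * value
    return v

# Example: hash a tiny sparse vector into 8 dimensions.
h_idx = make_mult_mod_prime(8, seed=1)
h_sgn = make_mult_mod_prime(2, seed=2)
print(feature_hash({3: 1.0, 17: 2.5, 1024: -0.5}, 8, h_idx, h_sgn))

The paper's point concerns the quality of the hash functions plugged in here: replacing h_idx and h_sgn with truly random hashing, mixed tabulation, multiply-mod-prime, or MurmurHash3 changes how well the resulting estimates concentrate, especially on structured input.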

Original language: English
Title of host publication: Neural Information Processing Systems 2017
Editors: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett
Number of pages: 11
Publisher: NIPS Proceedings
Publication date: 2017
Publication status: Published - 2017
Event: 31st Annual Conference on Neural Information Processing Systems - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017
Conference number: 31

Conference

Conference: 31st Annual Conference on Neural Information Processing Systems
Number: 31
Country/Territory: United States
City: Long Beach
Period: 04/12/2017 - 09/12/2017
Series: Advances in Neural Information Processing Systems
Volume: 30
ISSN: 1049-5258
