Training big random forests with little resources

    Abstract

    Without access to large compute clusters, building random forests on large datasets remains a challenging problem. This is in particular the case if fully grown trees are desired. We propose a simple yet effective framework that allows one to efficiently construct ensembles of huge trees for hundreds of millions or even billions of training instances using a cheap desktop computer with commodity hardware. The basic idea is a multi-level construction scheme, which builds top trees for small random subsets of the available data and subsequently distributes all training instances to the top trees' leaves for further processing. While conceptually simple, the overall efficiency crucially depends on the particular implementation of the different phases. The practical merits of our approach are demonstrated using dense datasets with hundreds of millions of training instances.
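
    A minimal sketch of this two-phase construction is given below, assuming in-memory NumPy arrays and scikit-learn decision trees. The names build_tree, predict_tree, subset_size, and max_top_depth are illustrative choices, not the paper's API, and a practical implementation would stream the data from disk rather than keep it all in memory.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def build_tree(X, y, subset_size=100_000, max_top_depth=8, seed=0):
            # Phase 1: fit a shallow "top tree" on a small random subset.
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=min(subset_size, len(X)), replace=False)
            top = DecisionTreeClassifier(max_depth=max_top_depth, random_state=seed)
            top.fit(X[idx], y[idx])

            # Phase 2: route *all* training instances to the top tree's
            # leaves, then grow an unpruned "bottom tree" per leaf bucket.
            leaf_ids = top.apply(X)
            bottom = {}
            for leaf in np.unique(leaf_ids):
                mask = leaf_ids == leaf
                sub = DecisionTreeClassifier(random_state=seed)  # fully grown by default
                sub.fit(X[mask], y[mask])
                bottom[leaf] = sub
            return top, bottom

        def predict_tree(top, bottom, X):
            # Prediction descends the top tree first, then the bottom tree
            # attached to the leaf that the instance lands in.
            leaf_ids = top.apply(X)
            y_pred = np.empty(len(X), dtype=top.classes_.dtype)
            for leaf, sub in bottom.items():
                mask = leaf_ids == leaf
                if mask.any():
                    y_pred[mask] = sub.predict(X[mask])
            return y_pred

    An ensemble is then obtained by calling build_tree repeatedly with different seeds (and hence different random subsets) and combining the per-tree predictions, e.g. by majority vote.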

    Original language: English
    Title: KDD 2018 - Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
    Publisher: ACM Association for Computing Machinery
    Publication date: 2018
    Pages: 1445-1454
    ISBN (print): 9781450355520
    DOI
    Status: Published - 2018
    Event: 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2018 - London, United Kingdom
    Duration: 19 Aug 2018 - 23 Aug 2018

    Conference

    Conference: 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2018
    Country/Territory: United Kingdom
    City: London
    Period: 19/08/2018 - 23/08/2018
    Sponsors: ACM SIGKDD, ACM SIGMOD
