Describing Images using Inferred Visual Dependency Representations

Desmond Elliott, Arjen de Vries

    29 Citations (Scopus)

    Abstract

    The Visual Dependency Representation (VDR) is an explicit model of the spatial relationships between objects in an image. In this paper we present an approach to training a VDR Parsing Model without the extensive human supervision used in previous work. Our approach is to find the objects mentioned in a given description using a state-of-the-art object detector, and to use successful detections to produce training data. The description of an unseen image is produced by first predicting its VDR over automatically detected objects, and then generating the text with a template-based generation model using the predicted VDR. The performance of our approach is comparable to a state-of-the-art multimodal deep neural network on images depicting actions.
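The final step of the pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the `Relation` class, the relation labels, and the `TEMPLATES` table are all illustrative assumptions about how a template-based generator might fill in slots from predicted VDR edges.

```python
# Hypothetical sketch: template-based caption generation from a predicted
# Visual Dependency Representation (VDR). All names and templates here are
# illustrative assumptions, not the paper's actual implementation.

from dataclasses import dataclass

@dataclass
class Relation:
    """One VDR edge: a spatial relation between two detected objects."""
    subject: str   # object label from the detector, e.g. "man"
    relation: str  # spatial relation label, e.g. "beside"
    obj: str       # second object label, e.g. "bicycle"

# Toy templates keyed by the spatial relation label.
TEMPLATES = {
    "beside": "A {subject} is beside a {obj}.",
    "on":     "A {subject} is on a {obj}.",
    "above":  "A {subject} is above a {obj}.",
}

def generate(vdr: list[Relation]) -> str:
    """Fill one template per VDR edge and join into a description."""
    sentences = [
        TEMPLATES[r.relation].format(subject=r.subject, obj=r.obj)
        for r in vdr
        if r.relation in TEMPLATES
    ]
    return " ".join(sentences)

print(generate([Relation("man", "beside", "bicycle")]))
```

Running the sketch on the single toy edge above prints "A man is beside a bicycle.", showing how a predicted VDR structure maps directly onto surface text.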

    Original language: Undefined/Unknown
    Title: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
    Number of pages: 11
    Publication date: 2015
    Pages: 42-52
    Status: Published - 2015