High-order pLSA for indexing tagged images
This work presents a method for the efficient indexing of tagged images. Tagged images are a common resource of social networks and occupy a large portion of the social media stream. Their defining characteristic is the co-existence of two heterogeneous information modalities, i.e., visual and tag, which refer to the same abstract meaning. This multi-modal nature of tagged images makes their efficient indexing a challenging task that, apart from dealing with the heterogeneity of the modalities, must also exploit their complementary information capacity. Towards this objective, we propose the extension of probabilistic Latent Semantic Analysis (pLSA) to higher order, so that it becomes applicable to more than two observable variables. Then, by treating images, visual features and tags as the three observable variables of an aspect model, we learn a space of latent topics that incorporates the semantics of both visual and tag information. Our novelty lies in using the cross-modal dependencies learned from a corpus of images to approximate the joint distribution of the observable variables. By penalizing the co-existence of visual content and tags that are known from experience to exhibit low dependency, we manage to filter out the effect of noisy content in the resulting latent space.
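To make the high-order aspect model concrete, the following is a minimal sketch of plain three-way pLSA trained by EM on an images × visual-words × tags co-occurrence tensor, assuming the factorization P(d, w, t) = Σ_z P(z) P(d|z) P(w|z) P(t|z). The function name, the dense-tensor representation, and the toy scale are illustrative assumptions; the cross-modal dependency penalty described above is not implemented here.

```python
import numpy as np

def high_order_plsa(counts, n_topics, n_iters=50, seed=0):
    """EM for a three-way aspect model P(d,w,t) = sum_z P(z)P(d|z)P(w|z)P(t|z).

    counts: (D, W, T) co-occurrence tensor over images, visual words and tags.
    Returns (P(z), P(d|z), P(w|z), P(t|z)).  Toy/illustrative sketch: the dense
    (Z, D, W, T) posterior tensor only fits in memory at small scale.
    """
    rng = np.random.default_rng(seed)
    D, W, T = counts.shape
    # Random initialisation, each conditional normalised per topic.
    pz = np.full(n_topics, 1.0 / n_topics)
    pd_z = rng.random((n_topics, D)); pd_z /= pd_z.sum(1, keepdims=True)
    pw_z = rng.random((n_topics, W)); pw_z /= pw_z.sum(1, keepdims=True)
    pt_z = rng.random((n_topics, T)); pt_z /= pt_z.sum(1, keepdims=True)

    for _ in range(n_iters):
        # E-step: posterior P(z|d,w,t) proportional to the factorized joint.
        post = (pz[:, None, None, None]
                * pd_z[:, :, None, None]
                * pw_z[:, None, :, None]
                * pt_z[:, None, None, :])
        post /= post.sum(0, keepdims=True) + 1e-12
        # M-step: weight the posteriors by the observed co-occurrence counts.
        resp = post * counts[None]           # (Z, D, W, T)
        nz = resp.sum((1, 2, 3))             # expected counts per topic
        pz = nz / nz.sum()
        pd_z = resp.sum((2, 3)) / (nz[:, None] + 1e-12)
        pw_z = resp.sum((1, 3)) / (nz[:, None] + 1e-12)
        pt_z = resp.sum((1, 2)) / (nz[:, None] + 1e-12)
    return pz, pd_z, pw_z, pt_z
```

In this sketch the latent-topic space shared by all three observables is exactly what the abstract refers to: each topic z carries a distribution over images, visual words and tags simultaneously, so visual and tag semantics are fused in one representation.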