VBNets: Entity Representation Learning via Variational Bayesian Networks
Entity representation learning is an active research field. Over the last decade, both the NLP and recommender systems communities have introduced a plethora of methods for mapping words, items, and users to vectors in a latent space. The vast majority of these methods utilize implicit relations (e.g. co-occurrences of words in text, co-consumption of items by users, etc.) for learning the latent entity vectors. However, entities (words, items, etc.) exhibit a power-law (long-tail) distribution, in which most entities appear only a small number of times and are therefore prone to receiving poor embeddings due to insufficient statistics. Yet, often, additional side information in the form of explicit (e.g. hierarchical, semantic, syntactic) relations can be leveraged for learning finer embeddings.
In this talk, we present Variational Bayesian Networks (VBNets), a novel scalable Bayesian hierarchical model that utilizes both implicit and explicit relations for learning entity representations. Unlike point-estimate solutions that map entities to vectors and tend to be overconfident, VBNets map entities to densities and hence model uncertainty. VBNets are based on analytical approximations of the intractable entity posterior and of the posterior predictive distribution. We demonstrate the effectiveness of VBNets on linguistic, recommendation, and medical informatics tasks. Our findings show that VBNets outperform alternative methods that facilitate Bayesian modeling with or without semantic priors. In particular, we show that VBNets produce superior representations for long-tail entities such as rare words and cold items.
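To make the density-based view concrete, the sketch below illustrates the general idea of Gaussian entity embeddings: each entity is represented by a diagonal Gaussian (mean and per-dimension variance), and pairwise similarity integrates over the uncertainty via the expected-likelihood kernel. This is a minimal illustration of uncertainty-aware representations in general, not the VBNets model itself; the function name and the kernel choice are assumptions for demonstration.

```python
import numpy as np

def log_expected_likelihood(mu_i, var_i, mu_j, var_j):
    """Log of the expected-likelihood (probability product) kernel between
    two diagonal Gaussians N(mu_i, diag(var_i)) and N(mu_j, diag(var_j)).

    This equals log N(0; mu_i - mu_j, diag(var_i + var_j)): the similarity
    shrinks both when the means are far apart and when either entity is
    highly uncertain (large variance), which is how density-based embeddings
    temper scores for rare, poorly-estimated entities.
    """
    var = var_i + var_j
    diff = mu_i - mu_j
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + diff ** 2 / var)

# A frequent entity (tight posterior) vs. a rare, long-tail entity
# (diffuse posterior) located at the same mean:
mu = np.zeros(4)
tight = np.full(4, 0.1)   # low variance: many observations
diffuse = np.full(4, 1.0)  # high variance: few observations

# Self-similarity is lower for the uncertain entity, even with equal means.
print(log_expected_likelihood(mu, tight, mu, tight))
print(log_expected_likelihood(mu, diffuse, mu, diffuse))
```

With a point estimate, both pairs above would score identically (equal means); the density view keeps the model's confidence calibrated by penalizing matches that rest on uncertain representations.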
If time permits, I will give a brief overview of several other deep learning works in the domains of neural attention mechanisms, multi-view representation learning, and inverse problems, with applications to natural language understanding, recommender systems, computer vision, sound synthesis, and computed tomography.