Sunday 10 a.m.–1 p.m. in

Exploring generative models with Python

Javier Jorge

Description

Did you know that neural networks learn an inner representation? This representation can be useful for understanding the data and for disentangling relationships between features. Over this manifold, the last layers of the network perform the classification. But what if we change the target of these networks? Instead of trying to classify correctly, we could ask them to reconstruct the input. Approaches based on this idea are known as generative models: models that learn a manifold of the original data and can provide new instances from it. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are two well-known techniques of this kind.

Regarding this learnt manifold: if the network can reconstruct the input after projecting it into this subspace, what lies between two projected samples? Is it useful, or just noise? We have explored this manifold empirically and found that some paths between projected samples yield unseen instances that are combinations of the inputs, e.g. digits that are rotated or shrunk, combinations of styles, etc. This could be useful for understanding the underlying distribution, or for providing new instances when data is scarce. To carry out these experiments we relied on Python, and in particular we made extensive use of TensorFlow. This poster describes the Python tools one can use to explore and reproduce these experiments, as well as the datasets we used.
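To make the idea concrete, here is a minimal sketch (not the authors' actual code) of one of the techniques named above: a small VAE trained on MNIST with tf.keras, followed by a linear walk between the latent projections of two samples, whose decoded points are the "in between" instances the abstract discusses. It assumes TensorFlow 2.x; the layer sizes, latent dimension, and epoch count are illustrative choices.

```python
# Minimal VAE + latent interpolation sketch (assumes TensorFlow 2.x).
import numpy as np
import tensorflow as tf

latent_dim = 2  # illustrative choice; 2 makes the manifold easy to plot

# Encoder: map a flattened 28x28 digit to the parameters of a Gaussian.
inputs = tf.keras.Input(shape=(784,))
h = tf.keras.layers.Dense(256, activation="relu")(inputs)
z_mean = tf.keras.layers.Dense(latent_dim)(h)
z_log_var = tf.keras.layers.Dense(latent_dim)(h)

# Reparameterisation trick: sample z = mean + sigma * epsilon.
def sample(args):
    mean, log_var = args
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * log_var) * eps

z = tf.keras.layers.Lambda(sample)([z_mean, z_log_var])

# Decoder layers are kept as variables so they can be reused standalone.
decoder_h = tf.keras.layers.Dense(256, activation="relu")
decoder_out = tf.keras.layers.Dense(784, activation="sigmoid")
outputs = decoder_out(decoder_h(z))

vae = tf.keras.Model(inputs, outputs)

# Loss = reconstruction error + KL divergence to the unit Gaussian prior.
rec = 784 * tf.keras.losses.binary_crossentropy(inputs, outputs)
kl = -0.5 * tf.reduce_sum(
    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1
)
vae.add_loss(tf.reduce_mean(rec + kl))
vae.compile(optimizer="adam")

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
vae.fit(x_train, epochs=5, batch_size=128)

# Interpolation: walk the straight line between the latent means of two
# inputs and decode each intermediate point into an unseen instance.
encoder = tf.keras.Model(inputs, z_mean)
dec_in = tf.keras.Input(shape=(latent_dim,))
decoder = tf.keras.Model(dec_in, decoder_out(decoder_h(dec_in)))

z_a, z_b = encoder.predict(x_train[:2])
steps = np.linspace(0.0, 1.0, 10)[:, None]
path = (1 - steps) * z_a + steps * z_b   # points in between the projections
in_between = decoder.predict(path)       # decoded samples along the path
```

Reshaping each row of `in_between` to 28x28 and plotting it shows the gradual transformation between the two digits; straight-line interpolation is only the simplest choice of path, and other routes through the latent space can be explored the same way.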