Oral presentation at ICML 2018


Abstract

Generative adversarial networks (GANs) aim to generate realistic data from some prior distribution (e.g., Gaussian noise). However, such a prior distribution is often independent of the real data and thus may lose semantic information (e.g., the geometric structure or content of images). In practice, the semantic information might be represented by some latent distribution learned from data, which, however, is hard to sample from in GANs. In this paper, rather than sampling from the pre-defined prior distribution, we propose a Local Coordinate Coding (LCC) based sampling method to improve GANs. We derive a generalization bound for LCC-based GANs and prove that a low-dimensional input is sufficient to achieve good generalization. Extensive experiments on various real-world datasets demonstrate the effectiveness of the proposed method.
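To make the idea of LCC-based sampling concrete, below is a minimal, hypothetical sketch (not the paper's exact algorithm): anchor points are fitted to latent codes of real data with a simple k-means loop, and generator inputs are then drawn as local convex combinations of a few nearby anchors, so that samples stay close to the data-induced latent manifold. The helper names (`fit_anchors`, `lcc_sample`) and all hyperparameters are illustrative assumptions.

```python
# Illustrative LCC-style sampling for a GAN generator input.
# Assumptions: latent codes of real data are available (simulated here),
# anchors come from a basic k-means loop, and coding weights are random
# convex weights over a few anchors. This is a sketch, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def fit_anchors(latent_codes, num_anchors=64, iters=20):
    """Pick anchor points with a simple k-means (Lloyd's) loop."""
    idx = rng.choice(len(latent_codes), size=num_anchors, replace=False)
    anchors = latent_codes[idx].copy()
    for _ in range(iters):
        # assign each latent code to its nearest anchor
        d = ((latent_codes[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(num_anchors):
            members = latent_codes[assign == k]
            if len(members) > 0:
                anchors[k] = members.mean(axis=0)
    return anchors

def lcc_sample(anchors, batch_size=16, num_bases=4, noise_std=0.05):
    """Sample generator inputs as convex combinations of a few anchors."""
    num_anchors, dim = anchors.shape
    samples = np.empty((batch_size, dim))
    for i in range(batch_size):
        chosen = rng.choice(num_anchors, size=num_bases, replace=False)
        w = rng.dirichlet(np.ones(num_bases))        # convex coding weights
        samples[i] = w @ anchors[chosen]             # local linear combination
        samples[i] += noise_std * rng.standard_normal(dim)  # small perturbation
    return samples

# Usage: latent codes would normally come from an encoder trained on real data;
# here they are simulated with Gaussian noise for illustration only.
latent_codes = rng.standard_normal((2000, 32))
anchors = fit_anchors(latent_codes, num_anchors=64)
z = lcc_sample(anchors, batch_size=16)   # feed z to the GAN generator
print(z.shape)  # (16, 32)
```

The key design point is that the input dimension fed to the generator is governed by the number of local bases rather than the ambient latent dimension, which is consistent with the paper's claim that a low-dimensional input suffices for good generalization.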

Date
Jul 12, 2018 2:30 AM — 2:40 AM
Location
Stockholmsmässan, Stockholm, Sweden
Jiezhang Cao
Ph.D. student

I am a lucky boy.