Semantic Image to Image Translation
Existing methods have shown great success in generating images with diverse attributes. But can we generate images with specific attributes?
We propose a novel Semantic Generative Adversarial Network to generate images with explicitly specified attributes. The input is an edge image together with sentences, where the sentences serve as guidance for the generator. The output is a synthetic image with the specific attributes described by the sentences. Our approach is to concatenate the desired attribute embedding with the image features learned in the generator, and to apply a discriminator to distinguish whether the generated image is real or fake.
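The concatenation step can be sketched as follows. This is a minimal illustration with hypothetical shapes and variable names (not taken from the released code): a sentence embedding is tiled over the spatial grid and joined channel-wise with the generator's feature maps.

```python
import numpy as np

# Hypothetical dimensions for illustration only.
batch, channels, h, w = 2, 64, 8, 8   # generator feature map shape
embed_dim = 16                        # sentence (attribute) embedding size

img_feats = np.random.randn(batch, channels, h, w)  # features from the edge image
text_emb = np.random.randn(batch, embed_dim)        # sentence embedding

# Tile the embedding across the spatial grid so it can be
# concatenated with the feature maps along the channel axis.
tiled = np.broadcast_to(text_emb[:, :, None, None],
                        (batch, embed_dim, h, w))
fused = np.concatenate([img_feats, tiled], axis=1)

print(fused.shape)  # (2, 80, 8, 8): channels + embed_dim
```

The fused tensor then passes through the remaining generator layers, so every spatial location is conditioned on the same attribute description.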
We test this method against Pix2pix and achieve a lower Mean Average Error. We further demonstrate that this simple yet effective approach can manipulate specific attributes of the generated images. Our code is available at https://github.com/e-271/semantic_image_translation