Diverse Semantic Image Editing with Style Codes

Bilkent University

DivSem is an end-to-end framework that encodes visible and partially visible objects with a novel mechanism, achieving consistency in the style encoding.

Abstract

Semantic image editing requires inpainting pixels following a semantic map. It is a challenging task since this inpainting requires both harmony with the context and strict compliance with the semantic maps.

The majority of previous methods proposed for this task try to encode all of the information from the erased image alone. However, when an object such as a car is added to a scene, its style cannot be inferred from the context alone. On the other hand, models that can output diverse generations struggle to produce seamless boundaries between the generated and unerased parts. Additionally, previous methods have no mechanism to encode the styles of visible and partially visible objects differently for better performance.

In this work, we propose a framework that encodes visible and partially visible objects with a novel mechanism to achieve consistency in the style encoding and in the final generations. We compare extensively with previous conditional image generation and semantic image editing algorithms, and our experiments show that our method significantly improves over the state of the art. Our method not only achieves better quantitative results but also produces diverse outputs.

Method


The overall pipeline for diverse semantic image editing. During training, we encode both the erased and the original images. The original image is encoded because, if a semantic or instance ID is completely erased, its style must be extracted from the original image during training. The outputs of the style encoder are used in the normalization layers of the generator. The normalization layers additionally take semantic maps and binary masks as inputs, which are omitted from the figure for brevity. As the generator, we use a multi-scale generator that refines its predictions at each stage.
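To make the training-time style selection concrete, below is a minimal PyTorch sketch, not the authors' released code: per-region style codes are pooled from the erased image where a region is still (partially) visible, with a fallback to the original image when the region is fully erased, and the codes then modulate a SPADE/SEAN-style normalization layer. Names such as `region_style_codes`, `RegionNorm`, the feature dimensions, and the "fully erased" threshold are illustrative assumptions; the multi-scale refinement generator is omitted.

```python
# Minimal sketch (assumed names and shapes, not the released DivSem code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def region_style_codes(feat_erased, feat_orig, seg_onehot, erase_mask, eps=1e-6):
    """Pool encoder features into one style code per semantic region.

    feat_erased, feat_orig: (B, C, H, W) features of the erased / original image.
    seg_onehot:             (B, K, H, W) one-hot semantic map.
    erase_mask:             (B, 1, H, W), 1 at erased pixels.
    Returns (B, K, C) style codes.
    """
    visible = seg_onehot * (1.0 - erase_mask)              # region pixels still visible
    vis_area = visible.sum(dim=(2, 3))                     # (B, K)
    full_area = seg_onehot.sum(dim=(2, 3))

    # Style from the erased image for (partially) visible regions.
    code_vis = torch.einsum('bkhw,bchw->bkc', visible, feat_erased) / (vis_area.unsqueeze(-1) + eps)
    # Training-time fallback: style from the original image for fully erased regions.
    code_orig = torch.einsum('bkhw,bchw->bkc', seg_onehot, feat_orig) / (full_area.unsqueeze(-1) + eps)

    fully_erased = (vis_area < 1.0).float().unsqueeze(-1)  # assumed threshold: no visible pixel left
    return fully_erased * code_orig + (1.0 - fully_erased) * code_vis


class RegionNorm(nn.Module):
    """Normalization modulated by per-region style codes and the semantic map."""

    def __init__(self, channels, style_dim, num_classes):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_gamma = nn.Linear(style_dim, channels)
        self.to_beta = nn.Linear(style_dim, channels)
        self.seg_gamma = nn.Conv2d(num_classes, channels, 3, padding=1)
        self.seg_beta = nn.Conv2d(num_classes, channels, 3, padding=1)

    def forward(self, x, style_codes, seg_onehot):
        # Broadcast each region's style code back onto its pixels: (B, K, C) -> (B, C, H, W).
        seg = F.interpolate(seg_onehot, size=x.shape[2:], mode='nearest')
        gamma = torch.einsum('bkhw,bkc->bchw', seg, self.to_gamma(style_codes)) + self.seg_gamma(seg)
        beta = torch.einsum('bkhw,bkc->bchw', seg, self.to_beta(style_codes)) + self.seg_beta(seg)
        return self.norm(x) * (1.0 + gamma) + beta
```

At inference time the original image is unavailable, so a fully erased region's code would instead come from a sampled or predicted style, which is what enables diverse generations.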

Results

Comparison with Other Models

Qualitative comparison on ADE20k-room and ADE20k-landscape. Columns: Masked input, SPADE, SEAN, SESAME, SPMPGAN, Ours.

Qualitative comparison on Cityscapes. Columns: Masked input, SPADE, SEAN, SESAME, SPMPGAN, SIEDOB, Ours.

Diverse Scene Editing

Panorama Results

Demo

Acknowledgement

We would like to thank the authors of SPMPGAN for open-sourcing their code; we built our codebase on their released implementation. This work has been funded by The Scientific and Technological Research Council of Turkey (TUBITAK), 3501 Research Project under Grant No. 121E097.