In this paper, we propose a deep-learning method for reconstructing rough patterns in a texture image while preserving its fine texture information. The method builds on pix2pix, a deep neural network that learns a mapping between input and output images. A previous pix2pix-based method took two inputs: the original texture image to be edited and an editing pattern image to be reflected in it. These two images alone, however, do not accurately reproduce the colors of the original texture. We therefore extend the previous method with a third input, a pattern image that links the original texture image to the editing pattern image, which improves the reproducibility of the pattern. The effectiveness of the improved method was verified through several experiments. Future work includes reducing blur in the generated images and reproducing the input texture information more faithfully.
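As a minimal illustration of the input setup described above, the sketch below shows how three conditioning images (original texture, editing pattern, and linking pattern image) could be stacked channel-wise before being fed to a pix2pix-style generator. This is an assumption about the input encoding, not the paper's exact implementation; all names and image sizes are hypothetical.

```python
import numpy as np

def stack_inputs(original, edit_pattern, link_pattern):
    """Concatenate three H x W x 3 images channel-wise into an
    H x W x 9 array, one common way a pix2pix-style generator
    receives multiple conditioning images."""
    for img in (edit_pattern, link_pattern):
        assert img.shape == original.shape, "all inputs must share one shape"
    return np.concatenate([original, edit_pattern, link_pattern], axis=-1)

# Hypothetical 256x256 RGB inputs, values in [0, 1]
h = w = 256
original = np.zeros((h, w, 3), dtype=np.float32)      # texture to edit
edit_pattern = np.ones((h, w, 3), dtype=np.float32)   # pattern to reflect
link_pattern = np.full((h, w, 3), 0.5, dtype=np.float32)  # linking pattern

x = stack_inputs(original, edit_pattern, link_pattern)
print(x.shape)  # (256, 256, 9)
```

The 9-channel tensor would then replace the usual 3-channel input of the pix2pix generator; only the first convolution layer's input depth changes.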