IMPROVING STYLE TRANSFER USING DEPTH EXTRACTION AND GENERATIVE ADVERSARIAL NETWORKS

Authors

  • S. Ramya
  • C. Radhika
  • D. Aravind Gosh

Keywords:

generative adversarial network, style transfer, image processing, artistic design

Abstract

The Depth Extraction Generative Adversarial Network (DE-GAN) is designed for artistic style transfer. Conventional style transfer models extract only texture and color features from style images through an autoencoder network and combine them in a high-dimensional code. The aesthetics of an artwork, however, encompass the color, texture, shape, and spatial characteristics of the depicted object, which together define its artistic style. This paper presents a multi-feature extractor that derives color features, texture features, depth features, and shape masks from style images using a U-Net, a multi-factor extractor, the fast Fourier transform, and the MiDaS depth estimation network. An autoencoder architecture serves as the core of the content extraction network and shares style parameters with the feature extraction network, enabling the generation of artwork images with three-dimensional artistic styles. Experimental analysis indicates that, relative to other state-of-the-art methods, images generated by DE-GAN exhibit higher subjective image quality, and their stylistic representations align more closely with the aesthetic attributes of authentic artworks. Quantitative analysis shows that images produced by DE-GAN perform better in terms of structural features, image distortion, clarity, and texture detail.
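The abstract names the fast Fourier transform as one component of the texture feature extraction. Purely as an illustration (the paper's actual extractor is not detailed on this page), a radially averaged log-magnitude spectrum is one common way to turn a 2-D FFT into a compact texture descriptor; the function name, bin count, and test images below are hypothetical choices, not taken from DE-GAN.

```python
import numpy as np

def fft_texture_descriptor(image, n_bins=16):
    """Radially averaged log-magnitude FFT spectrum as a crude texture descriptor.

    `image` is a 2-D grayscale array. The radial binning and bin count are
    illustrative, not the paper's design.
    """
    # Centered spectrum: low frequencies in the middle, high at the edges.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    log_mag = np.log1p(np.abs(spectrum))

    # Distance of each frequency sample from the DC component.
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)

    # Average the log-magnitude over concentric rings.
    edges = np.linspace(0.0, radius.max() + 1e-9, n_bins + 1)
    descriptor = np.empty(n_bins)
    for i in range(n_bins):
        ring = (radius >= edges[i]) & (radius < edges[i + 1])
        descriptor[i] = log_mag[ring].mean() if ring.any() else 0.0
    return descriptor

# A high-frequency checkerboard vs. a smooth horizontal gradient:
checker = np.indices((64, 64)).sum(axis=0) % 2
gradient = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)
d1 = fft_texture_descriptor(checker)   # energy concentrated in outer rings
d2 = fft_texture_descriptor(gradient)  # energy concentrated near DC
```

A descriptor like this captures how spectral energy is distributed across spatial frequencies, which is why the checkerboard and the gradient produce clearly different vectors even though both are simple two-tone images.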

Downloads

Download data is not yet available.

Published

07-03-2025

How to Cite

IMPROVING STYLE TRANSFER USING DEPTH EXTRACTION AND GENERATIVE ADVERSARIAL NETWORKS. (2025). International Journal of Information Technology and Computer Engineering, 13(1), 346-351. https://ijitce.org/index.php/ijitce/article/view/897