Detail View

Content-Adaptive Style Transfer: A Training-Free Approach with VQ Autoencoders

Title
Content-Adaptive Style Transfer: A Training-Free Approach with VQ Autoencoders
Issued Date
2024-12-10
Citation
Gim, Jongmin. (2024-12-10). Content-Adaptive Style Transfer: A Training-Free Approach with VQ Autoencoders. Asian Conference on Computer Vision, 187–204. doi: 10.1007/978-981-96-0917-8_11
Type
Conference Paper
ISBN
9789819609178
ISSN
0302-9743
Abstract
We introduce Content-Adaptive Style Transfer (CAST), a novel training-free approach for arbitrary style transfer that enhances visual fidelity using a vector-quantization-based pretrained autoencoder. Our method systematically applies coherent stylization to corresponding content regions. It starts by capturing the global structure of images through vector quantization, then refines local details using our style-injected decoder. CAST consists of three main components: a content-consistent style injection module, which tailors stylization to unique image regions; an adaptive style refinement module, which fine-tunes stylization intensity; and a content refinement module, which ensures content integrity through interpolation and feature distribution maintenance. Experimental results indicate that CAST outperforms existing generative and traditional style transfer models in both quantitative and qualitative measures. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
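
The pipeline described in the abstract can be illustrated with a minimal, training-free sketch. This is not the authors' implementation: the vq_autoencoder object and its encode/quantize/decode methods are assumed placeholders, and the three modules are collapsed here into a single AdaIN-style statistics-matching step with an interpolation weight alpha.

    # Hypothetical sketch of the CAST pipeline as described in the abstract.
    # Module names, signatures, and the AdaIN-style statistics matching are
    # illustrative assumptions, not the authors' released code.
    import torch

    def adain(content_feat, style_feat, eps=1e-5):
        # Match channel-wise mean/std of content features to the style features
        # (a common training-free stylization primitive).
        c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
        c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
        s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
        s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
        return s_std * (content_feat - c_mean) / c_std + s_mean

    def cast_stylize(vq_autoencoder, content_img, style_img, alpha=0.7):
        # Training-free stylization with a pretrained VQ autoencoder:
        # 1) capture global structure via vector quantization of the content image,
        # 2) inject style feature statistics into the latent,
        # 3) interpolate with the content latent to preserve content integrity.
        with torch.no_grad():
            z_content = vq_autoencoder.encode(content_img)    # assumed API
            z_quant, _ = vq_autoencoder.quantize(z_content)   # assumed API

            z_style = vq_autoencoder.encode(style_img)

            # Style injection + refinement, reduced to one AdaIN step for brevity.
            z_stylized = adain(z_quant, z_style)

            # Content refinement: interpolate toward the quantized content latent.
            z_out = alpha * z_stylized + (1.0 - alpha) * z_quant

            return vq_autoencoder.decode(z_out)               # assumed API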
URI
http://hdl.handle.net/20.500.11750/57876
DOI
10.1007/978-981-96-0917-8_11
Publisher
Asian Federation of Computer Vision

File Downloads

  • There are no files associated with this item.


Related Researcher

Im, Sunghoon (임성훈)

Department of Electrical Engineering and Computer Science
