#StackBounty: #neural-networks #conv-neural-network #tensorflow #invariance High resolution in style transfer

Bounty: 100

I’m investigating neural style transfer and its practical applications, and I’ve run into a major issue: are there methods for high-resolution style transfer? The original optimization-based algorithm by Gatys et al. can obviously produce high-resolution results, but it is a slow process, so it isn’t practical for real use.
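For context, the optimization-based approach mentioned above iteratively updates the output image itself, which is why it is slow per image. Here is a toy sketch of that loop, with a fixed random linear map standing in for the VGG feature extractor; everything in it is an illustrative stand-in, not Gatys’ actual network or hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained CNN feature extractor: a fixed random
# linear map (Gatys et al. use several VGG layers instead).
D, C = 64, 8                              # flattened image size, "channels"
W = rng.normal(size=(C, D)) / np.sqrt(D)

def features(x):
    return W @ x

def gram(f):
    return np.outer(f, f)                 # style statistics (Gram matrix)

def loss(x, f_content, g_style, style_weight):
    f = features(x)
    content_term = np.sum((f - f_content) ** 2)
    style_term = np.sum((gram(f) - g_style) ** 2)
    return content_term + style_weight * style_term

content = rng.normal(size=D)              # placeholder "content image"
style = rng.normal(size=D)                # placeholder "style image"
f_content = features(content)
g_style = gram(features(style))
style_weight = 1e-2

x = content.copy()                        # start from the content image
lr = 0.02
for _ in range(300):                      # the slow part: per-image optimization
    f = features(x)
    grad_f = 2 * (f - f_content) + style_weight * 4 * (gram(f) - g_style) @ f
    x -= lr * (W.T @ grad_f)              # gradient descent on the image itself

print(loss(x, f_content, g_style, style_weight))
```

Fast feed-forward models replace this whole loop with a single network pass, which is exactly why they are trained at fixed resolutions and run into the problems described below.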

What I’ve seen is that all pretrained fast style transfer models are trained with low-resolution images. For example, the TensorFlow example model is trained with 256×256 style images and 384×384 content images. The example explains that the content size can be arbitrary, but if you use 720×720 images or larger, the quality drops a lot: the output shows only small patterns of the style, massively repeated. If you upscale the content and style sizes accordingly, the result is even worse: the style effectively vanishes. Here are some examples of what I mean:

The original 384×384 result with 250×250 style size

1080×1080 result with 250×250 style size. Notice that it just repeats those small yellow circles over and over.

1080×1080 result with 700×700 style size. Awful result.
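For reference, the preprocessing these pretrained models expect, as described above, is just resizing the style image to 256×256 and the content image to 384×384 before inference. A minimal dependency-free sketch of that step (the nearest-neighbour resize and the random placeholder arrays are my own stand-ins, not the model’s actual pipeline):

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize of an HxWxC image, no external deps."""
    h, w = img.shape[:2]
    th, tw = size
    rows = np.arange(th) * h // th        # source row for each target row
    cols = np.arange(tw) * w // tw        # source column for each target column
    return img[rows][:, cols]

# Random arrays standing in for real photos, values in [0, 1].
content = np.random.rand(1080, 1080, 3).astype(np.float32)
style = np.random.rand(800, 600, 3).astype(np.float32)

# The sizes the pretrained model was trained with, per the example above.
content_in = resize_nearest(content, (384, 384))
style_in = resize_nearest(style, (256, 256))

print(content_in.shape, style_in.shape)  # (384, 384, 3) (256, 256, 3)
```

The failure modes in the images above come from deviating from these training sizes: keeping the style at 250×250 while pushing the content to 1080×1080 repeats small style patterns, and enlarging the style too makes the stylization vanish.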

So my question is: is there a way to train any of these models to be size-invariant? I don’t mind training the model myself, but I don’t know how to get good, fast results at arbitrary resolutions.
