I am interested in generative methods that produce signals (e.g. images) of varying sizes, where the output size can vary in a sort of “smooth/continuous” way. For example: generating images with a total size anywhere from 3,000 up to 196,608 (or even more, but every number in between matters to me) with a single model.
The only idea I had (which I think is totally ridiculous) is outputting each signal value (e.g. pixel) one at a time via an RNN, since RNNs are length-independent. That completely solves the size issue, but the RNN might then have to unroll for 3,000 or 196,608 steps (or hopefully even more), which seems ridiculous to me. Is there any real machine learning paper (ideally deep learning) that solves such a task?
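To make the idea concrete, here is a minimal toy sketch (assumption: a single-layer vanilla RNN with made-up weight names `W_h`, `W_x`, `W_o`; this is just an illustration of length-independent generation, not any published method). The same fixed parameters produce a signal of whatever length you ask for:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16  # hidden state size (arbitrary toy value)

# Fixed parameters, independent of output length.
W_h = rng.normal(scale=0.1, size=(H, H))  # hidden-to-hidden
W_x = rng.normal(scale=0.1, size=(H, 1))  # input-to-hidden
W_o = rng.normal(scale=0.1, size=(1, H))  # hidden-to-output

def generate(n):
    """Autoregressively emit n signal values; n is a free parameter."""
    h = np.zeros((H, 1))
    x = np.zeros((1, 1))  # start "token"
    out = []
    for _ in range(n):
        h = np.tanh(W_h @ h + W_x @ x)
        x = W_o @ h           # next value, fed back as input
        out.append(float(x[0, 0]))
    return np.array(out)

small = generate(3000)      # a 3,000-value signal
large = generate(196_608)   # e.g. a flat 256x256x3 image
```

The point is only that the parameter count stays constant while the output length varies freely; the drawback I'm worried about is exactly the 196,608 sequential steps this loop takes.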
In fact, are there any image generation tasks with this nature? I am not aware of any, but if anyone knows of one that would be fantastic! Note, however, that this is not really a transfer learning problem, so having 196,608 versions of the output layer seems a bit crazy to me (since I might even want more). The only made-up task I can come up with on the spot is generating crops of every size from the ImageNet dataset and then trying to generate any of them; I guess that might be an instance of the task I have in mind. Is there a real/serious task like this that the machine learning community is working on?