I am using machine learning to create a land use regression model. My inputs are geographic coordinates, which I use to extract 80×80 meter satellite images or map tiles that are fed to the model.
Let's take the example of a neural network:
Existence: A solution will always exist for any finite input. It may overflow if the inputs and/or weights are unreasonably large, but the solution exists in principle.
Uniqueness: The solution is unique by definition. Of course, the training data may contain non-unique outputs for the same inputs due to measurement noise, but the resulting model will produce exactly one output for a given input.
Continuity: I am uncertain how to adapt Hadamard's last criterion to land use regression. While neural networks are compositions of continuous functions and thus continuous themselves, the input space is not: the borders between objects, such as houses and streets, form discontinuities.
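To make the first half of that claim concrete, here is a minimal numpy sketch of a toy two-layer network (random weights, hypothetical sizes chosen only for illustration): it is a composition of affine maps and $\tanh$, all continuous, so small input perturbations produce correspondingly small output changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: affine -> tanh -> affine.
# Every piece is continuous, so the composition is continuous.
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def net(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

x = rng.normal(size=4)
for eps in [1e-1, 1e-3, 1e-6]:
    dx = eps * np.ones(4)
    # Output perturbation shrinks together with the input perturbation.
    print(eps, float(np.abs(net(x + dx) - net(x))))
```

The network is even Lipschitz: the output change is bounded by $\|W_2\|\,\|W_1\|\,\|\delta x\|$, since $\tanh$ is 1-Lipschitz. So the discontinuity I worry about below cannot come from the network itself; it can only come from the input map.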
Sure, if I deform what's inside the image continuously, there is no issue, but how can I allow, let's say, a new street to continuously enter the image? If I scan across a street as a function of time, there will be a time $t_n$ at which the image $I(t_n)$ does not contain the street, but after which $I(t_n + \delta)$ will include it for all $\delta > 0$ (as long as $\delta$ is not so large that the street has left on the opposite side of the image). This is a discontinuity that would lead to a discontinuous output as well, and would thus contradict Hadamard's final property.
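The jump can be demonstrated numerically. Below is a minimal sketch (1-D instead of 2-D for brevity, with a hypothetical binary "street" mask and window width standing in for the 80×80 m crop): the sup-norm distance between the crops just before and just after the street enters the window is a full unit, even though the offset changed by a single step.

```python
import numpy as np

# Hypothetical 1-D "world": a binary street mask on a fine grid.
world = np.zeros(1000)
world[600:650] = 1.0   # the street occupies positions 600..649

window = 100           # 1-D analogue of the 80x80 m image

def image(t):
    """The model input I(t): a crop of the world at integer offset t."""
    return world[t:t + window]

before = image(500)    # covers indices 500..599: no street pixels yet
after = image(501)     # covers indices 501..600: first street pixel appears

# Sup-norm distance between consecutive inputs jumps to 1.0:
print(float(np.abs(after - before).max()))  # 1.0
```

Note the jump is in the sup (or pointwise) norm of the image; in a weaker norm such as $L^2$ over the window, translating past a sharp edge changes the input by an arbitrarily small amount as the step shrinks, which hints at one way to recover continuity.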
What is the correct way to think about this if I want the problem to be well-posed? I suppose I could consider my inputs as a local statistic of sorts, with the actual input being the entire globe… But how would one go about making land use regression well-posed? Or, for that matter, any problem where the input is a rectangular subset of Earth as seen from space?