Historically, what has happened is the following:
1. There is a “mechanical” structure, most importantly arithmetic, which operates according to a set of well-defined rules that a stupid computer can easily follow; e.g. a computer can easily calculate $5349 \cdot 5345 = \ldots$
2. There is a “mechanical” proof system, most importantly first-order logic, which also operates according to well-defined rules that a stupid computer can follow; e.g. a computer can easily apply derivation rules to pass from $\forall x\, \phi(x)$ to $\exists x\, \phi(x)$, etc.
3. Then there is a non-mechanical, “creative” human, who uses his badly understood “natural insight” to formulate a set of “axioms about arithmetic”: words in (2)’s language “about” the operations of (1)’s arithmetic computations. He also formulates a set of “logical axioms”, the rules by which the mechanical proof system should operate. He chooses these rules because his intuitive insight says that they are “obviously correct”.
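To make the first two “mechanical” components concrete, here is a minimal sketch (formula and expression representations are my own invention, not anything standard) of a computer blindly evaluating arithmetic and blindly applying a syntactic derivation rule:

```python
# Toy illustration of the two "mechanical" components above: the computer
# evaluates arithmetic by following rules, and applies a derivation rule
# to formulas represented as nested tuples. All names here are ad hoc.

def evaluate(expr):
    """Mechanically evaluate an arithmetic expression tree."""
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    return a * b if op == "*" else a + b

def forall_to_exists(formula):
    """Derivation rule: from ('forall', x, phi) infer ('exists', x, phi)
    (sound in first-order logic over nonempty domains)."""
    quantifier, var, body = formula
    assert quantifier == "forall"
    return ("exists", var, body)

print(evaluate(("*", 5349, 5345)))                      # 28590405
print(forall_to_exists(("forall", "x", ("phi", "x"))))  # ('exists', 'x', ('phi', 'x'))
```

The point is only that neither step involves any “insight”: each is a purely symbolic transformation.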
It seems to me that this third element in the chain (though it happens first chronologically) has always been handled in an ad hoc way. But the fact that it has always been done this way doesn’t mean it necessarily has to be.
My question is: Has there been an analysis of the creation of axioms as a computational problem? That is, of the problem of choosing axioms about some mathematical structure, for use in a logic-language?
This computational problem is, essentially: given the rules of arithmetic, say, find first-order statements that we can adopt as axioms and then use to derive further properties.
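As one naive way the problem could even be posed computationally, here is a “generate and test” caricature: enumerate small candidate identities over $+$, $\cdot$, $0$ in two variables and keep those that hold on a finite sample of integers. (This is only a sketch under invented conventions; a real treatment would need a proof system and some non-redundancy criterion, and finite testing of course proves nothing.)

```python
# Naive axiom search as generate-and-test: enumerate small terms over
# x, y, 0 with + and *, and keep candidate identities lhs = rhs that
# survive checking on a small sample of integer assignments.

from itertools import product

def terms(depth):
    """All terms of nesting depth <= depth over x, y, 0, +, *."""
    if depth == 0:
        return [("x",), ("y",), (0,)]
    smaller = terms(depth - 1)
    out = list(smaller)
    for op, a, b in product(("+", "*"), smaller, smaller):
        out.append((op, a, b))
    return out

def evaluate(term, env):
    """Evaluate a term under an assignment of integers to x and y."""
    if len(term) == 1:
        leaf = term[0]
        return env[leaf] if leaf in env else leaf  # variable or the constant 0
    op, a, b = term
    va, vb = evaluate(a, env), evaluate(b, env)
    return va + vb if op == "+" else va * vb

def plausible_axiom(lhs, rhs, sample=range(-2, 3)):
    """Does lhs = rhs hold for every sampled assignment to x and y?"""
    return all(evaluate(lhs, {"x": x, "y": y}) == evaluate(rhs, {"x": x, "y": y})
               for x in sample for y in sample)

candidates = [(l, r) for l in terms(1) for r in terms(1)
              if l != r and plausible_axiom(l, r)]
# Survivors include e.g. commutativity x+y = y+x and the identity x = x+0.
```

Even this caricature makes the question concrete: what should replace the brute-force enumeration and the finite test, and what criterion singles out a *good* axiom among the survivors?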
PS: I know that this is a vague question. I am simply asking whether someone has analysed this problem in some way.