Which would be better: a statistical method that yields false positives on 5% of occasions and false negatives on 20%, or one that yields 10% false positives but only 5% false negatives?
The answer, of course, has to begin with “It depends…”. But what factors does it depend upon, and how are those factors taken into account in real-world applications of hypothesis testing?
I would say that it depends on some kind of utility (or loss) function that takes into account the circumstances and consequences of the decisions. However, I have never seen such a function specified or discussed in research papers in my area of basic pharmacology, and I assume it is similarly absent from papers in many other areas of science. Does that matter?
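To make concrete the kind of calculation I have in mind, here is a minimal sketch of an expected-loss comparison of the two methods from my opening question. The prevalence and cost figures are purely hypothetical placeholders; the point is only that the preferred method flips depending on them:

```python
def expected_loss(fpr, fnr, prevalence, cost_fp, cost_fn):
    """Expected loss per test, given the rate and cost of each error type.

    prevalence = assumed prior probability that a real effect exists.
    Expected loss = P(no effect) * FPR * cost_FP + P(effect) * FNR * cost_FN.
    """
    return (1 - prevalence) * fpr * cost_fp + prevalence * fnr * cost_fn

# Method A: 5% false positives, 20% false negatives
# Method B: 10% false positives, 5% false negatives
prevalence = 0.5  # hypothetical: effects present half the time

# With equal costs for the two error types, method B is preferred:
a = expected_loss(0.05, 0.20, prevalence, cost_fp=1.0, cost_fn=1.0)
b = expected_loss(0.10, 0.05, prevalence, cost_fp=1.0, cost_fn=1.0)
print(a, b)  # 0.125 vs 0.075 -> B wins

# If a false positive is (say) five times as costly, e.g. because it
# triggers an expensive and fruitless follow-up programme, the ranking flips:
a5 = expected_loss(0.05, 0.20, prevalence, cost_fp=5.0, cost_fn=1.0)
b5 = expected_loss(0.10, 0.05, prevalence, cost_fp=5.0, cost_fn=1.0)
print(a5, b5)  # 0.225 vs 0.275 -> A wins
```

The answer to “which method is better?” is therefore not a property of the error rates alone; it depends on the prevalence of real effects and on the relative costs of the two kinds of mistake, which is exactly the information a utility function would encode.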
It is probably safe to assume that the researchers themselves are responsible for the experimental design and analysis in most of the papers I read, but at least sometimes a statistician is consulted (usually after the data are in hand). Do statisticians discuss loss functions with researchers before advising on or performing a data analysis, or do they just use one that is implicit and unconsidered?