Question: I have fitted a probabilistic model (a Bayesian network) for a binary outcome variable. I would like to create a high-resolution calibration plot (e.g. a spline-based curve) corrected for overfitting via bootstrapping. Is there a standard procedure for calculating such a curve?
Considerations: I could do this easily with train/test splitting, but I would rather not throw away any data, as I have fewer than 20,000 samples. So I naturally thought of bootstrapping. I know that one such function (calibrate) is implemented in Frank Harrell’s rms package, but unfortunately the model I use is not supported by that package.
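For concreteness, here is a minimal sketch of the bootstrap optimism correction that (as I understand it) rms::calibrate uses, written so it works with any model exposing a sklearn-style `predict_proba`. All names here (`calib_curve`, `optimism_corrected_curve`) are my own, and the cubic polynomial in logit(p) is just a simple stand-in for a proper spline smoother:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def _logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def calib_curve(p, y, grid):
    """Smooth estimate of P(y=1 | predicted p), evaluated on `grid`.
    A cubic polynomial in logit(p) stands in for a spline smoother."""
    z = _logit(p)
    X = np.column_stack([z, z**2, z**3])
    sm = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
    zg = _logit(np.asarray(grid, dtype=float))
    Xg = np.column_stack([zg, zg**2, zg**3])
    return sm.predict_proba(Xg)[:, 1]

def optimism_corrected_curve(fit, X, y, grid, B=200, seed=None):
    """Harrell-style bootstrap optimism correction of a calibration curve.
    `fit(X, y)` must return a fitted model with predict_proba."""
    rng = np.random.default_rng(seed)
    n = len(y)
    # Apparent curve: model fitted and evaluated on the full data.
    model = fit(X, y)
    apparent = calib_curve(model.predict_proba(X)[:, 1], y, grid)
    optimism = np.zeros(len(grid))
    for _ in range(B):
        idx = rng.integers(0, n, n)  # bootstrap resample with replacement
        m_b = fit(X[idx], y[idx])
        # Apparent performance on the bootstrap sample ...
        boot_app = calib_curve(m_b.predict_proba(X[idx])[:, 1], y[idx], grid)
        # ... minus performance of the same fit on the original data
        boot_test = calib_curve(m_b.predict_proba(X)[:, 1], y, grid)
        optimism += (boot_app - boot_test) / B
    # Subtract the average optimism from the apparent curve.
    return apparent - optimism
```

The resulting curve can then be plotted against `grid` alongside the diagonal.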
Bonus question: is it possible to recalibrate a miscalibrated model with bootstrapping? I ask because I tried to recalibrate a model as follows:
- split the data into train/test sets
- fit the model to the train set
- recalibrate the model on the train set (with a cubic spline)
- evaluate calibration on the test set
The models recalibrated in this fashion were perfectly calibrated on the train set but noticeably less so on the test set, which probably indicates mild overfitting. I also tried splitting the test set further, calibrating on one split and evaluating calibration on the other. That gave better results (still not perfectly calibrated), but the sets became quite small (~1,000 samples each), making the calibration estimates unreliable.
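To make the four steps above concrete, here is a minimal sketch of the procedure. Everything here is a stand-in: a deliberately misspecified logistic regression plays the role of my Bayesian network (so that its raw predictions are miscalibrated), and a cubic polynomial in logit(p) plays the role of the cubic spline recalibrator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def cubic_features(p):
    """Cubic polynomial in logit(p) -- a stand-in for a cubic spline."""
    z = logit(p)
    return np.column_stack([z, z**2, z**3])

# Synthetic stand-in data; the true signal is quadratic, so a linear
# logistic regression is misspecified and its probabilities miscalibrated.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))
lp = (X**2) @ np.array([1.0, -1.0, 0.5, 0.0]) - 1.0
y = (rng.random(5000) < 1.0 / (1.0 + np.exp(-lp))).astype(int)

# 1. split the data into train/test sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 2. fit the model to the train set
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p_tr = base.predict_proba(X_tr)[:, 1]
p_te = base.predict_proba(X_te)[:, 1]

# 3. recalibrate on the train set
recal = LogisticRegression(max_iter=1000).fit(cubic_features(p_tr), y_tr)

# 4. evaluate calibration on the test set using the recalibrated probabilities
p_te_recal = recal.predict_proba(cubic_features(p_te))[:, 1]
```

The overfitting I observed shows up exactly at step 3: the recalibrator sees the same data the evaluation of "perfect" train-set calibration uses, so its flexibility is rewarded on the train set and penalized on the test set.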