In machine learning, two plots are commonly used to diagnose overfitting.
One is the learning curve, which plots the training and test error (y-axis) against the training set size (x-axis).
The other is the training (loss/error) curve, which plots the training and test error (y-axis) against the number of training iterations/epochs of a single model (x-axis).
Why do we need both curves? Specifically, what does a learning curve tell us that a training curve does not? (If we just want to detect whether a model overfits, the training curve seems much more efficient to plot, since it requires fitting only one model.)
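To make the distinction concrete, here is a minimal numpy sketch of how the two curves are computed. It assumes a synthetic binary-classification task and a hand-rolled gradient-descent logistic regression; the data and helper names (`fit_logreg`, `error`) are illustrative, not a standard API. The key structural difference: the learning curve fits one model per training-set size, while the training curve tracks a single model across epochs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data (an assumption for illustration).
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def error(w, X, y):
    # Misclassification rate of a logistic-regression weight vector.
    return np.mean((sigmoid(X @ w) > 0.5) != y)

def fit_logreg(X, y, epochs=50, lr=0.1, record=None, X_te=None, y_te=None):
    # Plain batch gradient descent; optionally records (train, test) error
    # after each epoch, which is exactly the training curve.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
        if record is not None:
            record.append((error(w, X, y), error(w, X_te, y_te)))
    return w

# Learning curve: one fully trained model per training-set size.
sizes = [50, 100, 200, 400, 800]
learning_curve = []
for m in sizes:
    w = fit_logreg(X_train[:m], y_train[:m])
    learning_curve.append((m, error(w, X_train[:m], y_train[:m]),
                           error(w, X_test, y_test)))

# Training curve: (train error, test error) per epoch for a single model.
training_curve = []
fit_logreg(X_train, y_train, record=training_curve,
           X_te=X_test, y_te=y_test)
```

Note that computing the learning curve requires `len(sizes)` full training runs, whereas the training curve falls out of a single run for free, which is what the question's efficiency remark is pointing at.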