Preallocating a vertex buffer on the GPU, filling it with vertex data, and then drawing it, to replicate legacy OpenGL immediate-mode functions like glVertex3f, glNormal3f, etc. and draw shapes with them.
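A minimal CPU-side sketch of that first approach, assuming a made-up `ImBatch` class: vertex calls accumulate into a preallocated buffer, and a flush uploads and draws the batch. The actual GL calls are left as comments so the sketch stands alone; the real code would preallocate once with glBufferData and stream with glBufferSubData.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical immediate-mode emulator (ImBatch is not a real API).
// Vertices accumulate on the CPU; flush() is where glBufferSubData +
// glDrawArrays would go in a real renderer.
struct Vertex { float x, y, z; float nx, ny, nz; };

class ImBatch {
public:
    explicit ImBatch(std::size_t maxVerts) : maxVerts_(maxVerts) {
        verts_.reserve(maxVerts);
        // GLuint vbo; glGenBuffers(1, &vbo);
        // glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // glBufferData(GL_ARRAY_BUFFER, maxVerts * sizeof(Vertex),
        //              nullptr, GL_DYNAMIC_DRAW);  // preallocate once
    }
    // Mirrors glNormal3f: sets the current normal for following vertices.
    void normal3f(float x, float y, float z) { cur_.nx = x; cur_.ny = y; cur_.nz = z; }
    // Mirrors glVertex3f: emits one vertex with the current attributes.
    void vertex3f(float x, float y, float z) {
        cur_.x = x; cur_.y = y; cur_.z = z;
        verts_.push_back(cur_);
        if (verts_.size() == maxVerts_) flush();  // buffer full: draw now
    }
    void flush() {
        if (verts_.empty()) return;
        // glBufferSubData(GL_ARRAY_BUFFER, 0,
        //                 verts_.size() * sizeof(Vertex), verts_.data());
        // glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts_.size());
        flushed_ += verts_.size();
        verts_.clear();
    }
    std::size_t pending() const { return verts_.size(); }
    std::size_t flushed() const { return flushed_; }
private:
    std::size_t maxVerts_;
    Vertex cur_{};
    std::vector<Vertex> verts_;
    std::size_t flushed_ = 0;
};
```

The key cost here is the per-frame upload: the buffer is refilled every frame, which is fine for small amounts of dynamic geometry but wasteful for shapes that never change.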
Sending the vertex data of all the primitive shapes to the GPU at once at the start of the program, then drawing the appropriate sub-range of the buffer (the draw call, not the vertex shader, selects which vertices are used) when drawing each shape.
These are all the approaches I could think of, but I'm not sure how optimal either of them is.
Do games and game engines use a similar approach, or is there an even better way to do this?