i'm planning my opengl project's base code, and the first phase is primarily 2D based.
what i plan to do is have a 'decoupled' base code (call it an engine if you like) that has 2 parts.
1) this part deals with calculating/updating the effects
2) this part deals with sending the effect data to the gl library and handling any updates to it
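to make the split concrete, here's a rough sketch of what i mean (all the names are just placeholders i made up, and the "renderer" only counts uploads where the real one would call into gl):

```cpp
#include <vector>

// part 1: pure simulation, knows nothing about opengl at all
struct ChessboardEffect {
    std::vector<float> colours;   // one rgb triple per square
    bool dirty = false;           // has anything changed since last upload?

    explicit ChessboardEffect(int squares) : colours(squares * 3, 0.0f) {}

    void update(float /*dt*/) {
        colours[0] += 0.1f;       // placeholder for some animated change
        dirty = true;
    }
};

// part 2: the only place the gl library would ever be touched
struct Renderer {
    int uploads = 0;              // stands in for pushing data to the card
    void draw(ChessboardEffect& e) {
        if (e.dirty) {
            // real code: re-send e.colours to the card, then issue the draw
            ++uploads;
            e.dirty = false;
        }
    }
};
```

that way part 1 could be unit tested (or reused on a platform with a different gl flavour) without touching the graphics library at all.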
to begin with, let's say the effect is a huge 2d plane/chessboard, and i want the view to be able to go from sitting at pixel level right out to zoomed and rotated.
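the zoom/rotate itself is just a standard 2d transform - in the real renderer it would live in a matrix handed to gl rather than being applied per point on the cpu, but this is the maths i have in mind (my own sketch, not from anywhere):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// zoom + rotate about the origin: scale first, then rotate by 'angle' radians
Vec2 zoomRotate(Vec2 p, float zoom, float angle) {
    float sx = p.x * zoom, sy = p.y * zoom;
    return { sx * std::cos(angle) - sy * std::sin(angle),
             sx * std::sin(angle) + sy * std::cos(angle) };
}
```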
after that i'll want to be able to dynamically change the colour of each square (say, to bounce a differently coloured square around the grid) or do some colour cycling effects - all kinds of stuff. basically this grid is the playground for a certain class of my effects.
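for the bouncing square, what i have in mind is nothing fancier than an integer position plus a direction that reflects at the edges, something like this (again just my own sketch):

```cpp
// bounce a highlighted square around an n x n grid:
// integer position plus direction, reflecting at the edges
struct Bouncer {
    int x = 0, y = 0, dx = 1, dy = 1;
    void step(int n) {
        x += dx; y += dy;
        if (x <= 0 || x >= n - 1) dx = -dx;   // hit left/right edge
        if (y <= 0 || y >= n - 1) dy = -dy;   // hit top/bottom edge
    }
};
```

each step the square that lost the highlight and the square that gained it are the only two that need their colours re-sent.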
from what i've read i should deal with triangles, so for each chessboard square i'd draw two triangles.
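i.e. for square (i, j) with side length s, emit six vertices - non-indexed here for clarity, though with indexed drawing the four corners could be shared (this is my own sketch of the layout, x/y only):

```cpp
#include <vector>

// the two triangles (six vertices, x/y pairs) for grid square (i, j)
std::vector<float> squareTriangles(int i, int j, float s) {
    float x0 = i * s, y0 = j * s, x1 = x0 + s, y1 = y0 + s;
    return { x0, y0,  x1, y0,  x1, y1,    // triangle 1
             x0, y0,  x1, y1,  x0, y1 };  // triangle 2
}
```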
i understand the idea of keeping vertex data in faster gpu memory, and with a bit of work i can track which squares have changed so i can update just those parts of the data that have already been sent to the card.
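the dirty-tracking side of that i'd picture roughly like this: a cpu-side mirror of the per-square data plus a set of changed indices, so only those squares get re-sent each frame (the "gpu" vector here just stands in for the buffer on the card, and each flushed square would become one sub-range upload in the real thing - names are all mine):

```cpp
#include <set>
#include <vector>

struct GridColours {
    std::vector<float> cpu;       // authoritative cpu-side copy
    std::vector<float> gpu;       // stand-in for the buffer on the card
    std::set<int> dirty;          // indices of squares changed this frame
    int uploadsThisFrame = 0;

    explicit GridColours(int squares) : cpu(squares * 3), gpu(squares * 3) {}

    void setColour(int sq, float r, float g, float b) {
        cpu[sq * 3] = r; cpu[sq * 3 + 1] = g; cpu[sq * 3 + 2] = b;
        dirty.insert(sq);
    }

    void flush() {                // once per frame, before drawing
        uploadsThisFrame = 0;
        for (int sq : dirty) {
            for (int k = 0; k < 3; ++k) gpu[sq * 3 + k] = cpu[sq * 3 + k];
            ++uploadsThisFrame;   // one sub-range upload per dirty square
        }
        dirty.clear();
    }
};
```

(adjacent dirty squares could of course be merged into one bigger upload, but that's an optimisation for later.)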
but this is where i'm confused.
i also read that i should be using vertex arrays and not vertex buffer objects because they're deprecated. then i'll find something that says the opposite - this is confusing me, as every time i look up vbo's they're positively recommended over plain vertex arrays.
i think part of this is down to them not being deprecated in opengl es? (which in turn i think is because SoC-type devices share one pool of ram between cpu and gpu, so there's less point differentiating)
so, to be bang up to date and avoid anything that's deprecated, what should i use?
as an additional gotcha, i'd like to maintain compatibility with opengl es 2.0.
although i'll be working with opengl on windows, i hope to later recompile without too much effort on arm-based devices like the raspberry pi, and on tablets using that kind of SoC with opengl es 2.0 on board.