Most of our App renders dynamically created textures and meshes.
Currently we’ve modified our raster map rendering shader to incorporate the “Height Map”, so that it can highlight various parts of the raster map based on contour lines.
The issue we have is that our raster tiles are not aligned to our height map (and can’t be): each part of the world uses slightly different tile sizes, due to Mercator projection math.
As a result, at runtime we’re repeatedly calling two methods on the same “heightmap” texture:
_elevTexture.SetSize(newSize…)
_elevTexture.SetData(newdata…)
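To make the pattern concrete, here’s roughly what the per-tile update looks like (a sketch only: Tile, visibleTiles, and GetElevationDataFor are stand-ins for our actual types, and the SetSize/SetData signatures are simplified):

foreach (Tile tile in visibleTiles)
{
    // Tile dimensions drift by a pixel or two across the world
    // because of the Mercator projection math.
    int w = tile.HeightmapWidth;   // e.g. 200, then 199, then 198...
    int h = tile.HeightmapHeight;

    float[] elevations = GetElevationDataFor(tile); // w * h samples

    _elevTexture.SetSize(w, h);        // does this re-allocate?
    _elevTexture.SetData(elevations);  // upload the new samples
}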
If SetSize changes the pixel dimensions by just a few pixels (e.g. from 200x200 to 199x198), what happens in GPU memory (or CPU RAM)? Does it keep the previous 200x200 allocation and simply present it as 199x198, wasting the extra pixels? Or does it discard the 200x200 allocation and allocate a whole new block sized at 199x198?
We’re hoping it doesn’t re-allocate the whole texture in GPU RAM unless SetSize increases the required size above the previous maximum.
If it does allocate fresh GPU RAM on every resize (even when shrinking), then we’ll instead create our own larger-than-needed texture up front and just set up the UV coordinates to ignore the unused pixels, roughly as sketched below.
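If it comes to that, the fallback would look something like this (again a sketch: the sub-region SetData overload, MaxTileWidth/MaxTileHeight, and SetUVScale are assumptions about our own wrapper, not a real engine API):

const int MaxTileWidth  = 256;  // safely above the largest tile we expect
const int MaxTileHeight = 256;

void Initialize()
{
    // Allocate once at the maximum size; never call SetSize again.
    _elevTexture.SetSize(MaxTileWidth, MaxTileHeight);
}

void UpdateFor(Tile tile)
{
    int w = tile.HeightmapWidth;   // e.g. 199
    int h = tile.HeightmapHeight;  // e.g. 198

    // Write this tile's samples into the top-left w x h sub-region.
    // (Assumes our wrapper exposes a sub-region upload.)
    _elevTexture.SetData(GetElevationDataFor(tile), 0, 0, w, h);

    // Shrink the UVs so the shader samples only the used pixels:
    // u runs over [0, w/MaxTileWidth], v over [0, h/MaxTileHeight].
    float uScale = (float)w / MaxTileWidth;
    float vScale = (float)h / MaxTileHeight;
    _rasterMaterial.SetUVScale(uScale, vScale); // placeholder for passing to the shader
}

The trade-off would be a bit of permanently wasted texture memory (the unused border) in exchange for never touching the allocation after startup.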