I recently received an email from Christophe Delépine about precision issues in geometry clipmaps. I thought this information might be of interest to others so I’m just gonna post my reply here.
Christophe: In Hoppe’s original implementation (without vertex textures), vertex coordinates are stored in VBOs and updated toroidally as the observer moves. If I understand correctly, this has an important consequence: vertices must have absolute world coordinates. The problem is that you lose precision if coordinates are stored as floats instead of doubles. This makes it impossible to render very large terrains. Do you agree?
It depends upon how you implement things. When I first started writing my USGS terrain renderer I naively rendered everything in global coordinates and I experienced a huge loss in precision whenever I wanted to perform any operations on the terrain. Back in April I wrote:
Robert: I should mention that I encountered some horrible floating point error dealing with UTM coordinates. The coordinates for Corvallis are something like (470187.46959, 4927222.852384). Big numbers. The spacing between heights in the elevation data file is somewhere around 0.00087. Well, I can’t just go around adding floating point numbers of such different magnitudes together and expect things to work (I tried, heh), so I got around this by translating everything to the origin and working with the elevation and GeoTIFF data in reference to 0,0 instead of those huge UTM coordinates. Everything is still UTM, it’s just offset by a lot.
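To see how bad this is in single precision, here’s a tiny standalone example (mine, not part of the original exchange): at a magnitude of ~470 km, adjacent float values are about 0.03 apart, so adding a 0.00087 step does literally nothing.

```cpp
#include <cstdio>

int main()
{
    // A UTM easting like the Corvallis example (~4.7e5 metres) as a 32-bit float.
    // At this magnitude, representable float values are ~0.03 apart.
    float easting = 470187.46959f;
    float spacing = 0.00087f;            // grid spacing from the elevation file

    float stepped = easting + spacing;   // the small term is rounded away entirely
    printf("float : %f + %f = %f\n", easting, spacing, stepped);

    // The same sum in double precision keeps the increment.
    double steppedD = 470187.46959 + 0.00087;
    printf("double: %.6f\n", steppedD);
    return 0;
}
```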
The solution to this problem was to move the origin closer to the data I was rendering and then store the data as offsets from the new origin. If you want to render a very, very large terrain I’d imagine your implementation would need to segment the terrain into smaller chunks and then store these chunks as offsets from their respective origins. Maybe a technique that combines quadtrees and geometry clipmaps?
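Here’s a rough sketch of that chunk/offset idea; the structure and names are mine, not Hoppe’s or my actual renderer’s: keep each chunk’s origin in double precision on the CPU, and store the vertices as small single-precision offsets from it.

```cpp
#include <vector>

struct Vec2d { double x, y; };   // absolute UTM coordinates (CPU side)
struct Vec2f { float  x, y; };   // small local offsets, safe to keep in a VBO

// Rebase a chunk of absolute UTM vertices onto a local origin. The offsets are
// at most a chunk-width in size, so single precision holds them comfortably;
// only the double-precision origin is needed to place the chunk in the world.
std::vector<Vec2f> rebaseChunk(const std::vector<Vec2d>& utm, const Vec2d& origin)
{
    std::vector<Vec2f> local;
    local.reserve(utm.size());
    for (const Vec2d& p : utm)
        local.push_back({ static_cast<float>(p.x - origin.x),
                          static_cast<float>(p.y - origin.y) });
    return local;
}
```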
I’m not sure what you would do near the edges of the quad patches. Maybe your GPU clipmap code would need to be smart enough to pick which dataset to sample from and recalculate the offsets accordingly. I can imagine a shader program that takes 9 different sets of heightmaps, one for each of the possible quad patches it could hit. Maybe it could take only 4, and the CPU program would be smart enough to know the other 5 that it’s not likely to hit because it isn’t near those edges.
Christophe: In Hoppe’s second implementation (GPU-based), VBOs are never updated and (x,y) coordinates remain constant. The floating point precision problem disappears, but the grid is still flat and does not follow the earth’s curvature.
I don’t know what they are off the top of my head, but there are transformations that can take a UTM coordinate (x, y, and height) and put it into a spherical coordinate that is consistent with the earth. It’s probably a trivial matter to do this on the GPU.
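For illustration only: the full UTM-to-latitude/longitude inverse (an inverse transverse Mercator projection) is more involved than I want to reproduce here, but once you have latitude, longitude, and height, placing the point on a simple spherical earth model looks roughly like this (a sphere rather than the WGS84 ellipsoid; the radius and names are my own):

```cpp
#include <cmath>

struct Vec3d { double x, y, z; };

// Place a point given by latitude/longitude (radians) and height (metres) on a
// spherical earth model. A real pipeline would first invert the UTM projection
// to get latitude/longitude, and would likely use the WGS84 ellipsoid instead
// of a single mean radius.
Vec3d onSphericalEarth(double latRad, double lonRad, double heightM)
{
    const double kEarthRadiusM = 6371000.0;   // mean earth radius
    const double r = kEarthRadiusM + heightM;
    return { r * std::cos(latRad) * std::cos(lonRad),
             r * std::cos(latRad) * std::sin(lonRad),
             r * std::sin(latRad) };
}
```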
I think you might still run into a floating point precision issue, however. If you’re working with full UTM coordinates on the GPU, then a transformation that projects them into a spherical space will probably suffer from a large loss of precision. The solution? I’m not sure. If you store the data as offsets from a moving origin, then your spherical coordinate transformation will need to take that into account. I’m not sure how that would work, but it could probably be figured out.
Christophe: Another difficult problem which is somewhat related is the mapping of the clipmap texture onto the terrain geometry. Let’s suppose I have a huge clipmap texture that I want to map onto the terrain. My current clipmapping implementation assumes that UV coordinates in the texture are proportional to latitude & longitude.
Again, if you store the terrain data as offsets from a moving origin then this makes the texture mapping problem easier. Terrain data at the origin is u,v = 0,0 and terrain data at the next origin is 1,0 or 0,1; you get the idea. This would only work if your texture was very large, because you’re going to want to do geometry clipmapping within each of these large patches. Yeah, things will get tricky if you have a high-resolution texture that you’re trying to map, but there’s probably a way to split things up.
Christophe: Now, if terrain is represented as a grid where points have constant x,y coordinates, then I would have to find the corresponding latitude/longitude for each point to get the correct UV coordinates. This does not seem trivial to me.
First you translate the constant x,y coordinates to world/patch space on the GPU, then you find their UV coordinates:
U = (x - xoffset) / xpatchwidth
V = (y - yoffset) / ypatchheight
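In code that’s just a rescale into the patch’s [0,1] range; the offset and width names here follow the formulas above and aren’t from any particular implementation:

```cpp
struct UV { float u, v; };

// Map a vertex's constant grid coordinate into the current patch's [0,1]
// texture space. xOffset/yOffset locate the patch origin; xPatchWidth and
// yPatchHeight give its extent, matching the formulas above.
UV gridToUV(float x, float y,
            float xOffset, float yOffset,
            float xPatchWidth, float yPatchHeight)
{
    return { (x - xOffset) / xPatchWidth,
             (y - yOffset) / yPatchHeight };
}
```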