Thursday, May 24, 2012

Bounds vs Points in Spatial Indexes

Hmmm, spatial indices.

In GIS land I’ve recently been working on finding intersecting circles.

Most ‘fixes’ from phones and things are given as longitude, latitude and radius.  The phone knows only that it is within some distance of some point (as in, some distance from a fixed cell-phone tower).

Sometimes you get approximate height above sea level too, but I think that’s derived from the fixed cell-phone tower and some averaging.  But it’s not so interesting, luckily.

The largest real radius I’ve seen is approaching 30 miles!

A shallow look at your average GIS database suggests it is modelling points, not these circles.

Two circles (or spheres) overlap if the distance between their centres is less than the sum of their radii.
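
As a quick sanity check, that test sketched in Python (flat-plane distance here; the real query would be geodesic, and the names are mine):

```python
import math

def circles_overlap(ax, ay, a_radius, bx, by, b_radius):
    """Two circles overlap iff the distance between their centres
    is less than the sum of their radii."""
    return math.hypot(bx - ax, by - ay) < a_radius + b_radius
```

So circles of radius 5 and 4 overlap when their centres are 8 apart, but not when they are 9 or more apart (touching doesn’t count as overlapping here).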

In PostgreSQL, to find all circles overlapping some circle A would be something like:

WHERE ST_Distance('POINT(a_longitude a_latitude)'::geography, row_centre) < row_radius + a_radius

(I haven’t tested that snippet)

Now, making the shocking guess that PostgreSQL’s spatial index is storing points, that looks a lot like a full table scan.  Even if row_radius is indexed so that its maximum value is known, that’d still be scary.
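
The usual workaround with a point-only index is to widen the search window by the largest radius that could be stored, let the index return candidates, then run the exact test on those. A rough sketch of the idea, with a plain list standing in for the index (all names assumed):

```python
import math

def find_overlapping(circles, ax, ay, a_radius, max_row_radius):
    """circles: list of (x, y, radius). A point index can only
    prefilter by centre, so pad the query window by the largest
    stored radius, then run the exact overlap test on candidates."""
    pad = a_radius + max_row_radius
    # stand-in for the index: a square window around the query centre
    candidates = [c for c in circles
                  if abs(c[0] - ax) <= pad and abs(c[1] - ay) <= pad]
    return [c for c in candidates
            if math.hypot(c[0] - ax, c[1] - ay) < a_radius + c[2]]
```

The padding keeps the prefilter from missing a big circle whose centre is far away but whose edge still reaches the query circle.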

I relived a past chain of thought from when I was making hobby spatial indices for my hobby gaming experiments…

It’s normal to create game octrees and such storing the points of units and objects, and then, when searching for all objects near some point, to add a fudge of the largest unit or object size.  This works because the space indexed is much larger than the known maximum unit size.

However, instead I stored bounds in my octree.  An object is stored in the smallest node that can contain it.
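
A minimal sketch of that filing rule, using a 2D quadtree in place of the octree for brevity (all names assumed): descend while some child wholly contains the object’s AABB, and stop at the first node where none does.

```python
def contains(node_lo, node_hi, lo, hi):
    """True if the node's bounds wholly contain the AABB (lo, hi)."""
    return all(nl <= l and h <= nh
               for nl, nh, l, h in zip(node_lo, node_hi, lo, hi))

def child_bounds(node_lo, node_hi):
    """Yield the four quadrant bounds of a 2D node."""
    mid = tuple((a + b) / 2 for a, b in zip(node_lo, node_hi))
    for ix in range(2):
        for iy in range(2):
            lo = (node_lo[0] if ix == 0 else mid[0],
                  node_lo[1] if iy == 0 else mid[1])
            hi = (mid[0] if ix == 0 else node_hi[0],
                  mid[1] if iy == 0 else node_hi[1])
            yield lo, hi

def smallest_containing(node_lo, node_hi, lo, hi, depth=0, max_depth=8):
    """Return (bounds, depth) of the smallest node that can wholly
    contain the object's AABB (lo, hi)."""
    if depth < max_depth:
        for c_lo, c_hi in child_bounds(node_lo, node_hi):
            if contains(c_lo, c_hi, lo, hi):
                return smallest_containing(c_lo, c_hi, lo, hi,
                                           depth + 1, max_depth)
    return (node_lo, node_hi), depth
```

An object straddling a node’s centre stays in that node, even if the node has children, which is exactly the filing rule above.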

There are various recipes for dealing with fast-moving objects or objects that span two or more cells.  I just put these objects into the node that contains them, even if that node has children.  If they are fast-moving I compute reasonably large bounds, e.g. covering their positions over the next n frames, and store that, so I don’t have to wriggle them each update.
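
The fast-mover bounds might be sketched like this, assuming straight-line motion over the next n frames (names are illustrative):

```python
def swept_bounds(pos, half_size, velocity, n_frames):
    """AABB covering the object now and at its predicted position
    n_frames ahead, so a fast mover need not be re-filed each frame.
    Assumes straight-line motion; pos/velocity are per-axis tuples."""
    future = tuple(p + v * n_frames for p, v in zip(pos, velocity))
    lo = tuple(min(p, f) - h for p, f, h in zip(pos, future, half_size))
    hi = tuple(max(p, f) + h for p, f, h in zip(pos, future, half_size))
    return lo, hi
```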

When the camera moves, I compute the intersection of the viewing frustum with the tree and put the intersecting items in a list.  This visible list is managed by the scene-graph, so it is only recalculated when the camera moves; moving objects in the graph are moved within this list using tombstones and unsorted change-lists that get sorted and merged back in as needed.  I note in a bit-flag whether the node, and which of its children, intersect the frustum, so later, when objects are moving and being moved in the tree, I can avoid unnecessary frustum checks.
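
A stripped-down sketch of that walk, with an axis-aligned view rectangle standing in for the frustum and nodes as plain dicts (a real version would also test items individually; all names assumed):

```python
def aabb_overlap(a_lo, a_hi, b_lo, b_hi):
    """True if two AABBs overlap on every axis."""
    return all(al <= bh and bl <= ah
               for al, ah, bl, bh in zip(a_lo, a_hi, b_lo, b_hi))

def collect_visible(node, view_lo, view_hi, out):
    """Walk the tree; record on each node whether it overlaps the
    view (the flag the moving-object code checks later), and append
    the items of overlapping nodes to `out`."""
    node['visible'] = aabb_overlap(node['lo'], node['hi'],
                                   view_lo, view_hi)
    if not node['visible']:
        return
    out.extend(node['items'])
    for child in node['children']:
        collect_visible(child, view_lo, view_hi, out)
```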

Then I sort the list by shader program and front-to-back.

To draw, I iterate through the list drawing the opaque parts of the models.  For the models that have semi-transparent parts I build a singly-linked list, so I can then make a quick second pass back through those items, back-to-front, drawing the semi-transparent parts.
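
The two passes can be sketched like this; prepending to a singly-linked list during the front-to-back opaque pass leaves that list in back-to-front order for free (the item shape and callback names are assumptions of mine):

```python
def draw_scene(visible_front_to_back, draw_opaque, draw_transparent):
    """First pass: draw opaque parts front-to-back, chaining up items
    with semi-transparent parts.  Second pass: walk the chain, which
    prepending has left in back-to-front order, drawing those parts."""
    head = None  # singly-linked list as nested (item, next) tuples
    for item in visible_front_to_back:
        draw_opaque(item)
        if item['has_transparent']:
            head = (item, head)  # prepend
    while head is not None:
        item, head = head
        draw_transparent(item)
```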

As items move, the bit-fields tell me whether the tree node they are in is visible, so I know whether I have to insert, remove or re-sort them in the visible array that the tree is maintaining.  If I have to re-sort them, rather than doing the sort there and then, I just put a tombstone in their old position in the list and put them in a new ‘dirty’ list; it’s then only necessary to sort the things that moved and merge that into a new visible list in O(n).  Those kinds of tricks gained real frames for me.
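
A sketch of that merge (assumed names; `key` is the draw-order key, and the tombstone sweep is folded into a filter here for brevity): sort only the small moved list, then merge it with the still-sorted remainder in O(n).

```python
def merge_moved(visible, moved, key):
    """`visible` was sorted by `key` before the items in `moved`
    changed.  Drop the moved items' old slots (the tombstones), sort
    only the small moved list, then do a linear two-way merge."""
    moved_ids = {id(m) for m in moved}
    survivors = [v for v in visible if id(v) not in moved_ids]
    moved_sorted = sorted(moved, key=key)
    out, i, j = [], 0, 0
    while i < len(survivors) and j < len(moved_sorted):
        if key(survivors[i]) <= key(moved_sorted[j]):
            out.append(survivors[i]); i += 1
        else:
            out.append(moved_sorted[j]); j += 1
    out.extend(survivors[i:])
    out.extend(moved_sorted[j:])
    return out
```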

My implementation was a bit sloppy; you can infer all the intersection tests for children from a test against the centre point and so on, but my simpler-to-follow, redundant AABB intersection testing never glowed in profiling.

