Pigeon Sim goes to LonCon3

The pigeon sim visited LonCon3, the 72nd World Science Fiction Convention, at the ExCeL Centre last week. The event ran from Thursday 14th to Monday 18th August, with the weekend looking like the best part, as lots of people were dressed up as science fiction characters. We did get Thor on the pigeon sim during the Friday shift, though, along with Batman and Spider-Man.

steve-IMG_20140813_160435


The picture above shows the build after Steve and Stephan had finished putting it all together on the Wednesday afternoon.

IMG_20140815_131901

IMG_20140815_140311


The white tents show the “village” area, while we were in the exhibitors’ area, which is the elevated part at the top of the pink steps on the right. Our pigeon sim exhibit was about two rows behind the steps on the elevated level.

I’m slightly disappointed that Darwin’s pigeons (real live ones) weren’t arriving until Saturday, so I missed them, but the felt ones were very good:

IMG_20140815_131450


And finally, you can’t have a science fiction event without a Millennium Falcon made out of Lego:

IMG_20140815_140611

Made by Craig Stevens, it even folds in half so you can see inside: http://www.loncon3.org/exhibits.php#70 and http://modelmultiverse.com

Tiling the Blue Marble

Following on from my last post about level of detail and Earth spheroids, here is the NASA Blue Marble texture applied to the Earth:

The top-level texture is the older composite Blue Marble, which shows blue oceans and a very well-defined North Pole ice sheet. Once the view zooms in, all subsequent levels of detail show the next-generation Blue Marble from January 2004 with topography [link]. Incidentally, the numbers on the texture are the tile numbers in the format Z_X_Y, where Z is the zoom level, so the higher the number, the more detailed the texture map. The green lines show the individual segments and textures used to make up the spheroid.
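As a rough illustration of that Z_X_Y scheme (the exact root-tile layout is my assumption, not taken from the tiler itself), each tile at zoom level Z subdivides into four children at zoom Z + 1, with the X and Y indices doubling:

```java
/**
 * Minimal sketch of quad tree tile numbering in the Z_X_Y format described
 * above. The root layout is an assumption; only the parent/child index
 * arithmetic is the point.
 */
public class TileId {
    final int z, x, y;

    TileId(int z, int x, int y) { this.z = z; this.x = x; this.y = y; }

    /** The four children of this tile, one zoom level down. */
    TileId[] children() {
        return new TileId[] {
            new TileId(z + 1, 2 * x,     2 * y),
            new TileId(z + 1, 2 * x + 1, 2 * y),
            new TileId(z + 1, 2 * x,     2 * y + 1),
            new TileId(z + 1, 2 * x + 1, 2 * y + 1)
        };
    }

    @Override
    public String toString() { return z + "_" + x + "_" + y; }  // e.g. "2_1_3"
}
```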

In order to create this, I’ve used the eight Blue Marble tiles, which are each 21,600 pixels square, resulting in a full-resolution texture of 86,400 x 43,200 pixels. Rather than try to handle this all in one go, I’ve added the concept of “super-tiles” to my Java tiling program. The eight 21,600-pixel Blue Marble squares are the “super-tiles”, which themselves get cut into a larger number of 1,024-pixel quadtree squares that are used for the Earth textures. The Java class that I wrote to do this can be viewed here: ImageTiler.java. As you can probably see from the GitHub link, this is part of a bigger project that I was originally using to condition 3D building geometry for loading into the globe system. You can probably guess from this what the chunked LOD algorithms are going to be used for next.
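For illustration, here is a minimal sketch of the super-tile idea: read one large square image and write out fixed-size tiles. It is not the real ImageTiler.java (the file names, tile naming and lack of resampling are all simplifications), just the core loop:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

/**
 * Sketch of cutting one large "super-tile" image into fixed-size square tiles.
 * Note that 21,600 is not a multiple of 1,024, so a real tiler would resample
 * or pad each super-tile first; that step is omitted here.
 */
public class SuperTileCutter {

    public static void cut(File superTile, File outDir, int tileSize) throws Exception {
        BufferedImage src = ImageIO.read(superTile);   // e.g. one 21,600 x 21,600 Blue Marble square
        outDir.mkdirs();
        int cols = src.getWidth() / tileSize;
        int rows = src.getHeight() / tileSize;
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                BufferedImage tile = src.getSubimage(x * tileSize, y * tileSize, tileSize, tileSize);
                // Hypothetical naming: X_Y within this super-tile; the real tiler
                // would offset these into global Z_X_Y quad tree coordinates.
                ImageIO.write(tile, "png", new File(outDir, x + "_" + y + ".png"));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical input file name for one Blue Marble super-tile.
        cut(new File("bluemarble_A1.png"), new File("tiles"), 1024);
    }
}
```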

Finally, one thing that has occurred to me is that tiling is a fundamental algorithm. Whether it’s cutting a huge texture into bits and wrapping it around a spheroid, or projecting 2D maps onto flat planes to make zoomable maps, the need to reduce detail to a manageable level is the same. Even the 3D content isn’t immune from tiling, as we end up cutting geometry into chunks and using quadtree or octree algorithms. Part of the reason for this rests with the new graphics cards, which mean that progressive mesh algorithms like ROAM (Duchaineau et al.) are no longer effective. Old progressive mesh algorithms would use CPU cycles to optimise a mesh before passing it on to the graphics card. With modern GPUs, spending a lot of CPU cycles to make a small improvement to a mesh before sending it to a powerful graphics card doesn’t result in a significant speed-up. Chunked LOD works better, with blocks of geometry being loaded in and out of GPU memory as required. Add to this the fact that we’re working with geographic data and spatial indexing systems all the time, and solutions to the level of detail problem start to present themselves.


Links:

NASA Blue Marble: http://visibleearth.nasa.gov/view_cat.php?categoryID=1484

Image Tiler: https://github.com/maptube/FortyTwoGeometry/blob/master/src/fortytwogeometry/ImageTiler.java

ROAM paper: http://dl.acm.org/citation.cfm?id=267028 (and http://www.cognigraph.com/ROAM_homepage/)

Building Virtual Worlds

The algorithms required to build virtual worlds like Google Earth and World Wind are really fascinating, but building a virtual world containing real-time city data is something that hasn’t yet been fully explored. Following on from the Smart Cities presentation in Oxford two weeks ago, I’ve taken the agent-based London Underground simulation and made some improvements to the graphics. While I’ve seen systems like Three.js, Unity and Ogre used for some very impressive 3D visualisations, what I wanted to do required a lower-level API that allowed me to make some further optimisations.

Here is the London Underground simulation from the Oxford presentation:

The Oxford NCRM presentation showed the Earth tiled with a single-resolution NASA Blue Marble texture, which became apparent as the view zoomed in to London to show the tube network and the screen-space resolution of the texture map dropped.

The Earth texture and shape need some additional work, which is where “level of detail” (LOD) comes in. The key point here is that most of the work is done by the shader using chunked LOD. If the Earth is represented as a spheroid using, for example, 40 width segments and 40 height slices, then recursively divided using either quadtree or octree segmentation, we can draw a more detailed Earth model as the user zooms in. By using the same number of points for each sub-mesh, only a single index buffer is needed for all LODs, and no texture coordinates or normals are stored. The shader uses the geodetic latitude and longitude calculated from the Cartesian coordinates passed for rendering, along with the patch minimum and maximum coordinates, to get the texture coordinates for every texture tile.
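As a rough, CPU-side sketch of what that shader step is doing (written in Java rather than GLSL; the WGS84 constants and Bowring’s single-step conversion are my assumptions, not necessarily what the real shader uses), the idea is to recover geodetic latitude and longitude from the Cartesian vertex position and then normalise them against the patch bounds:

```java
/**
 * Sketch of per-vertex texture coordinate recovery for a spheroid patch:
 * Cartesian (ECEF) position -> geodetic lat/lon -> [0,1]^2 UV within the
 * patch's min/max bounds. Constants and conversion method are assumptions.
 */
public final class PatchTexCoords {
    static final double A = 6378137.0;                     // semi-major axis (m)
    static final double B = 6356752.314245;                // semi-minor axis (m)
    static final double E2  = (A * A - B * B) / (A * A);   // first eccentricity squared
    static final double EP2 = (A * A - B * B) / (B * B);   // second eccentricity squared

    /** Geodetic (lat, lon) in radians from an ECEF Cartesian position (Bowring's formula). */
    static double[] toGeodetic(double x, double y, double z) {
        double lon = Math.atan2(y, x);
        double p = Math.hypot(x, y);
        double beta = Math.atan2(z * A, p * B);            // parametric latitude
        double lat = Math.atan2(z + EP2 * B * Math.pow(Math.sin(beta), 3),
                                p - E2  * A * Math.pow(Math.cos(beta), 3));
        return new double[] { lat, lon };
    }

    /** Map (lat, lon) into [0,1]^2 texture coordinates for a patch with the given bounds. */
    static double[] toUV(double lat, double lon,
                         double latMin, double latMax, double lonMin, double lonMax) {
        double u = (lon - lonMin) / (lonMax - lonMin);
        double v = (lat - latMin) / (latMax - latMin);
        return new double[] { u, v };
    }
}
```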

GeoGL_Smartie          GeoGL_WireframeSmartie

The two images above show the Earth using the NASA Blue Marble texture. The semi-major axis has been increased by 50%, which gives the “smartie” effect and serves to show the oblateness around the equator. The main reason for doing this was to get the coordinate systems and polygon winding around the poles correct.

In order for the level of detail to work, a screen-space error tolerance constant (labelled Tau) is defined. The rendering of the tiled Earth now works by starting at the top level and calculating a screen-space error based on the space that the patch occupies on the screen. If this is greater than Tau, then the patch is split into its higher-resolution children, which are then recursively tested for screen-space error in the same way. Once the screen-space error is within the tolerance Tau, the patch is rendered.
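The traversal itself is simple; here is a minimal sketch of that split-or-render decision (the Patch and Camera types and the screenSpaceError() method are hypothetical stand-ins, not the actual classes in my code):

```java
/**
 * Minimal sketch of the chunked LOD traversal described above: render a patch
 * if its projected error is within tolerance, otherwise recurse into its
 * higher-resolution children.
 */
class LodTraversal {
    interface Camera {}

    interface Patch {
        double screenSpaceError(Camera cam);   // projected error of this patch in pixels
        java.util.List<Patch> children();      // higher-resolution children (empty at max depth)
        void render();
    }

    static void renderLod(Patch patch, Camera cam, double tau) {
        // Draw the patch if it is within tolerance or cannot be refined further;
        // otherwise test each child recursively.
        if (patch.screenSpaceError(cam) <= tau || patch.children().isEmpty()) {
            patch.render();
        } else {
            for (Patch child : patch.children()) {
                renderLod(child, cam, tau);
            }
        }
    }
}
```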

GeoGL_Earth    GeoGL_OctTreeEarth

The two images above show a correct rendering of the Earth, along with the underlying mesh. The wireframe shows a triangular patch on the Earth at the point closest to the viewer, which is rendered at double the resolution (highlighted with a red line). Octree segmentation has been used for the LODs.

The code has been made as flexible as possible: the screen error tolerance, mesh slicing and quadtree/octree tiling can all be configured to allow for experimentation.

The interesting thing about writing a 3D system like this is that it shows that tiling is a fundamental operation in both 2D and 3D maps. In web-based 2D mapping, maps are cut into tiles, usually 256 pixels square, to avoid having to load massive images into the web browser. In 3D, the texture sizes might be bigger, but, bearing in mind that Google are reported to be storing around 70TB of texture data for the Earth, there is still the issue of out-of-core rendering. For the massive terrain rendering systems, management of data being moved between GPU buffers, main memory, local disk and the Internet is the key to performance. My feeling was that I would let Google do the massive terrain rendering and high-resolution textures and just concentrate on building programmable worlds that allow exploration, simulation and experimentation with real data.

Finally, here’s a quick look at something new:

VirtualCity

The tiled Earth mesh uses procedural generation to create the levels of detail, so, extending this idea to procedural cities, we can follow the “CGA Shape” methodology outlined in the “Procedural Modeling of Buildings” paper to create our own virtual cities.
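As a toy illustration of the CGA Shape idea (the rule names and dimensions here are made up, and this only shows the one-dimensional repeat split, not the full grammar from the paper), a facade can be repeat-split into floors like this:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy sketch of a CGA-style repeat split: a facade is divided into floors of
 * roughly a target height, snapped so they exactly fill the parent shape.
 * Names and sizes are illustrative only.
 */
public class FacadeSplit {
    record Shape(String symbol, double x, double y, double width, double height) {}

    /** Repeat-split a shape along the y axis into floors of roughly the given height. */
    static List<Shape> splitIntoFloors(Shape facade, double floorHeight) {
        int n = Math.max(1, (int) Math.round(facade.height() / floorHeight));
        double h = facade.height() / n;   // snap so the floors fill the facade exactly
        List<Shape> floors = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            floors.add(new Shape("Floor", facade.x(), facade.y() + i * h, facade.width(), h));
        }
        return floors;
    }

    public static void main(String[] args) {
        Shape facade = new Shape("Facade", 0, 0, 20, 9);
        for (Shape floor : splitIntoFloors(facade, 3)) {
            System.out.println(floor);
        }
    }
}
```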


Useful Links:

http://tulrich.com/geekstuff/chunklod.html

http://tulrich.com/geekstuff/sig-notes.pdf 

http://peterwonka.net/Publications/pdfs/2006.SG.Mueller.ProceduralModelingOfBuildings.final.pdf