Tools for the understanding of spatio-temporal climate scenarios in local planning: Kimberley (BC) case study

Major adaptation and mitigation

Funded by the Swiss National Science Foundation (SNSF), and in collaboration with the Collaborative for Advanced Landscape Planning (CALP) – particularly Ellen Pond – the City of Kimberley, and the Columbia Basin Trust (CBT), I have analysed the benefits and limitations of interactive virtual globes for stakeholder engagement in climate-related scenario planning over the last 12 months. The results have now been published as an SNSF report and can be downloaded here.

Natural Earth public domain map dataset

Natural Earth is a public domain map dataset available at 1:10m, 1:50m, and 1:110m scales. “Featuring tightly integrated vector and raster data, with Natural Earth you can make a variety of visually pleasing, well-crafted maps with cartography or GIS software.”

Natural Earth Vector comes in Esri shapefile format; Natural Earth Raster comes in TIFF format with a TFW world file. All Natural Earth data use geographic coordinates on the WGS84 datum.

Unfortunately, the data are only provided as ZIP file downloads; no OGC Web Feature Service or Web Coverage Service is offered.
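The TFW world file that accompanies each Natural Earth raster is simply a six-line text file defining an affine transform from pixel to geographic coordinates. A minimal sketch in Python of reading such a file, assuming illustrative values for a hypothetical global 21600 × 10800 pixel raster (the numbers are not taken from an actual Natural Earth download):

```python
# A world file holds six numbers, one per line: pixel width (A), the two
# rotation terms (D, B), pixel height (E, negative for north-up images),
# and the geographic coordinates (C, F) of the centre of the upper-left pixel.
def parse_world_file(text):
    a, d, b, e, c, f = (float(v) for v in text.split())
    return a, d, b, e, c, f

def pixel_to_geo(col, row, params):
    a, d, b, e, c, f = params
    # Affine transform: pixel (col, row) -> geographic (x, y) in degrees
    return a * col + b * row + c, d * col + e * row + f

# Illustrative TFW content for a 0.0166667-degrees-per-pixel global raster;
# real Natural Earth values may differ.
tfw = """0.016666666667
0.0
0.0
-0.016666666667
-179.991666667
89.991666667"""

lon, lat = pixel_to_geo(0, 0, parse_world_file(tfw))
print(lon, lat)  # centre of the upper-left pixel
```

Since the data are plain WGS84 geographic coordinates, the resulting longitude/latitude pairs can be used directly in virtual globes without reprojection.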

Bing Maps incorporates Photosynth models and aims at semantics in the long term

Microsoft Photosynth is photogrammetric software that creates 3D models of buildings from multiple photos, e.g. shot by random tourists. Our colleagues from the urbandigital blog are very much in favour of Photosynth and see great potential in it for urban visualization, or as a kind of 3D scanner. Now, Microsoft has taken the logical next step and integrated Photosynth with Bing 3D. One may criticize that the Microsoft approach requires Silverlight, which is still not standard. However, Bing users can now create buildings automatically from photos, whereas Google Earth users model their content in SketchUp. It will be very interesting to compare both approaches and to see which one ultimately finds more users.

Another interesting approach by Microsoft is mentioned by Chris Dannen in the Fast Company blog: in the long term, Microsoft wants to extract semantic information automatically from the user-generated photos. Here, Microsoft meets the latest research in photogrammetry, e.g. in the “Nachwuchsgruppe der Volkswagen Stiftung” (a junior research group funded by the Volkswagen Foundation) in cartography at the University of Hanover, where the automatic extraction of facades from photos is being investigated.

With regard to landscapes, vegetation is still not an issue – neither for Google nor for Microsoft. How about extracting vegetation information automatically from photos? There is a lot of research on recognizing vegetation in orthophotos – how about linking this to the automatic population of virtual landscapes with realistic plants?
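One common starting point in that research is a simple colour index: the Excess Green index (ExG = 2G − R − B) responds strongly to vegetation in RGB orthophotos and can seed a placement mask for virtual plants. A minimal sketch in Python with NumPy – the threshold and the tiny synthetic image are illustrative assumptions, not part of any of the projects mentioned above:

```python
import numpy as np

def vegetation_mask(rgb, threshold=20):
    """Flag likely vegetation pixels via the Excess Green index (2G - R - B)."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    return (2 * g - r - b) > threshold

# Tiny synthetic "orthophoto": one vegetation-like pixel, one grey pixel
img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = (60, 180, 60)    # green: meadow or tree crown
img[0, 1] = (120, 120, 120)  # grey: pavement
mask = vegetation_mask(img)
print(mask)  # [[ True False]]
```

In a real pipeline, pixels flagged by such a mask could then be sampled as candidate positions for plant billboards or 3D tree models in the virtual landscape.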