Archive for the 'OpenStreetMap' Category

Published by breki on 12 Feb 2012

GroundTruth News

Alexander Ovsov has kindly translated my GroundTruth posts into Romanian. Quoting Alexander:

Our global volunteer project is called “Translation for Education”; it is located all around the world, with no headquarters. Its purpose is translating articles on interesting science subjects such as biology, chemistry, geology, medicine, and information technology, in order to assist students and university staff who are not very good at foreign languages and help them become familiar with relevant scientific news from abroad. We translate from English into the smaller indigenous European (Indo-European) languages, not the big ones.

Thanks, Alexander, and keep up the good work!

Although I haven’t done any GroundTruth work for a long time, a couple of days ago I started migrating (and upgrading) the source code from my local Subversion repository to Mercurial on Bitbucket. The code is in turmoil, but I hope I’ll be able to take some time away from other work to clean it up and fix some bugs users have found in the last year. I will also try to incorporate some new stuff I’ve developed for Maperitive.

Published by breki on 12 Mar 2011

Maperitive: Plans For The Second Year

Creative Commons License photo credit: spitfirelas

Today Maperitive celebrates its first birthday! It has been a productive year and Maperitive has learnt a lot of new stuff.

This post is dedicated to my wishes and plans for features I want Maperitive to learn in its next year. Some of these features I planned even before I started working on the software, while others are fresher and indicate a shift in focus. My initial goal was to concentrate more on the GUI and less on the features, but as time passed the GUI became less and less important. This is mostly due to my decision to implement scripting support, which made a lot of things easier to implement through the command line than by writing GUI code.

This doesn’t mean the GUI will be totally forgotten. It does however mean I will prioritize other things over it. My general goal is for Maperitive to become a tool for making high-quality maps with emphasis on scripting and automation.

The following subsections describe some major things I want to see in Maperitive in the near future.

Better OSM Relations Support

This is something I’ve already started working on. The idea is to provide a better way to work with OSM relations. I’m still in the brainstorming phase on this one, but I can give a few examples of things I want users to be able to do:

  • If an OSM way is shared by two or more relations, I want the way to be treated as a single entity. Think of a way that represents both a country and a municipal boundary. I want only the country boundary to be shown in this case.
  • Aggregate OSM tags for two or more relations sharing the same way. Think of five bicycle route names drawn in a single box for such a way.


IronPython Support

I’ve started playing with IronPython as a way to allow users to specify custom code for various parts of the rendering pipeline. A few examples come to mind:

  • Custom painting of the map (after the map elements have been rendered). This would allow drawing user-defined grids, labels, logos, legends etc.
  • Individual map element customization, like setting the color of a line based on the value of some OSM tag, or setting the width of a line based on how many bicycle routes cross a way.

I already did some testing with IronPython and it looks very promising.
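To make the idea concrete, here is a minimal sketch of what user-supplied styling hooks could look like. The function names and the shape of the tags argument are my assumptions for illustration, not Maperitive’s actual API:

```python
# Hypothetical user-supplied styling hooks that a rendering pipeline
# could call for each map element. Names and signatures are made up.

def line_color(tags):
    """Pick a line color from the element's OSM tags."""
    if tags.get("highway") == "motorway":
        return "#809BC0"
    if tags.get("highway") == "residential":
        return "#FFFFFF"
    return "#C0C0C0"  # fallback for anything else

def line_width(bicycle_route_count):
    """Scale line width with the number of bicycle routes on the way."""
    return 1.0 + 0.5 * bicycle_route_count
```

The renderer would call such hooks per element after matching the rendering rules, which is exactly the kind of flexibility that is hard to express in a declarative ruleset alone.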

Improved Scripting

IronPython could also be used to make the existing Maperitive scripting much more powerful. The current script language (if you can call it that) is just too limiting: no variables, no loops, no branches etc.
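As a sketch of what Python-powered scripting could add, assume an embedded interpreter with a bridge function (the `run_command` below is made up) that issues ordinary Maperitive commands; variables and loops then come for free:

```python
# Hypothetical sketch: looping over areas with an assumed bridge
# function `run_command` that would issue Maperitive commands.

issued = []

def run_command(command):
    # In a real embedding this would invoke the Maperitive command;
    # here we just record it so the control flow is visible.
    issued.append(command)

areas = ["Stajerska", "Gorenjska"]
for area in areas:
    run_command("load-source %s.osm.bz2" % area)
    run_command("generate-tiles minzoom=11 maxzoom=15")
```

The command strings mirror the existing script syntax; only the control flow around them is new.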

I also plan to add a pure console application for headless scripting, so you won’t need to look at the GUI when running scripts.

Automatic Label Placement

This is the Holy Grail of cartography: having map labels arrange themselves in a nice-looking way. I plan to devote a lot of time to this one.

Label Abbreviations

This one is closely related to label placement. I’ve actually implemented some parts of this feature, but there is a lot more to be done before releasing it to the public.

Other Map Projections

It’s time to cut the ropes Mercator has tied us with. The Mercator projection is nice for Web maps, but there are a lot of other interesting map projections that can be used to show OpenStreetMap data. Implementing this won’t be as simple as it sounds: the Mercator projection has some pretty nice properties which make rendering on screen much easier than with some other projections, and this is important for a real-time renderer like Maperitive. But I guess it can be done.
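For context, here is a minimal sketch of the (spherical) Mercator forward projection. Note that x depends only on longitude and y only on latitude, so axis-aligned bounding boxes survive projection, which is one of the properties that makes real-time screen rendering easy:

```python
# Spherical Mercator forward projection (unit sphere), for illustration.
import math

def mercator(lon_deg, lat_deg):
    """Project lon/lat in degrees to Mercator x/y."""
    x = math.radians(lon_deg)
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```

Projections without this separability (conic or azimuthal ones, say) force the renderer to reproject every vertex and curve graticule lines, which is where the extra work comes from.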

Data Mining

Planet.osm is getting bigger and bigger and increasingly difficult to consume. I want to create a toolset for extracting data from large OSM files. Yes, I know there’s Osmosis, but I want good integration of data mining with Maperitive. And this is another point of entry for Python: writing custom filter code. This is still in the brainstorming phase.

Using Databases

I’ve played with SpatiaLite and PostGIS back in 2009 and managed to do some integration with Kosmos. I expect this to be one of the harder things to implement. I want to avoid the hacky approach Mapnik uses for storing OSM data in a database – my wish is to keep the data structure in its original form.


That’s it, really. I don’t like writing long posts. I guess you’ll have to be patient as some of the things mentioned above start to trickle in. Have a good day!

Published by breki on 25 Feb 2011

Maperitive Build 1138

A new build is here! The major new thing is support for SRTM1 and custom digital elevation models (DEMs). Well, when I say “custom”, I actually mean anything that’s compatible with SRTM *.hgt files. You can read more about this in Maperitive’s online book.

I wanted to do a little test of the new functionality, so I decided to choose a small area of the Alps as my testing ground. I chose the Alps because there’s a good DEM source for them, which I wrote about some time ago: Viewfinder’s DEM. This let me compare it with standard SRTM3 data.

The results are stunning. Here’s a sample hiking map of Lake Brienz using standard SRTM3 DEM (notice the “white” spots which are missing elevation data):

Brienzersee hiking map using SRTM3 DEM

And now for something completely different, a man with three buttocks:

Brienzersee hiking map using Viewfinder's Alps SRTM1 DEM

I especially like the slopes shading north of the lake.

Published by breki on 20 Jan 2011

Maperitive Build 1108

Supporting the Liberty (fries?)
Creative Commons License photo credit: Omar Eduardo

My previous post about PBF reading successes was written way too prematurely. It turned out my PBF reading code had some serious bugs which made reading look much faster than it actually was (one of the reasons was that I neglected to read OSM node keys/values when they were written in the PBF dense node format).

I’ve subsequently written some extensive tests, comparing OSM database contents from XML and PBF files of the same area (thanks, Geofabrik) on an object-by-object basis, so I’m now 95% sure the PBF code works OK. Performance-wise, the (final?) results are much less glamorous than they looked initially: PBF reading is “only” 2.5 times faster than reading OSM.bz2 files, while in terms of memory consumption they are pretty much the same. I’m curious what other OSM software like Osmosis has to say about these results.

I had hoped I could speed up PBF reading by spreading the work over several processor cores. What I did was use Microsoft’s Parallel Extensions library to separate the fetching of PBF file blocks from the actual parsing of them onto two (or more) cores. This resulted in only about a 10% increase in overall speed (tested on my two-core machine, so on more cores the result could be better).

It actually proved pretty hard to do a decent job of separating the work in a balanced fashion. Since file reading is sequential, it can only be done by one thread/core, so you want to put as little other work on that core as possible. As soon as file block bytes are fetched from the file, they are delegated to another core, which parses them (in terms of protocol buffers) and then extracts OSM objects from them. The problem is that you don’t want to enqueue too many file blocks at the same time, since this takes up valuable memory (which is already filled with extracted OSM objects). So I ended up using a blocking queue, which means the main thread (which reads the file) will wait until at least one core is available before filling the queue with another file block.
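The layout above can be sketched in a few lines. This is a minimal Python rendition of the bounded producer/consumer pattern (the real code uses .NET Parallel Extensions); the small queue bound is what keeps the reader from racing ahead of the parsers and filling memory:

```python
# Bounded producer/consumer: one sequential reader thread, one parser
# thread, and a small blocking queue between them.
import queue
import threading

block_queue = queue.Queue(maxsize=2)  # small bound = little buffered memory
parsed = []
parsed_lock = threading.Lock()

def reader(blocks):
    for block in blocks:          # sequential file reading
        block_queue.put(block)    # blocks while the queue is full
    block_queue.put(None)         # sentinel: no more blocks

def worker():
    while True:
        block = block_queue.get()
        if block is None:
            break
        with parsed_lock:
            parsed.append(block.upper())  # stand-in for PBF parsing

blocks = ["block%d" % i for i in range(5)]
t_reader = threading.Thread(target=reader, args=(blocks,))
t_worker = threading.Thread(target=worker)
t_reader.start(); t_worker.start()
t_reader.join(); t_worker.join()
```

With more workers you would enqueue one sentinel per worker; the structure is otherwise unchanged.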

I’ve also tried a micro-management strategy – using multiple cores to extract individual OSM objects – but this only really works for ways and relations. Current PBF extracts use the dense nodes format, which is delta-encoded and thus forces you to read things sequentially on a single thread of execution. I guess this is the price of having a format that tries to satisfy two different (and inherently conflicting) goals: less space and less CPU.

I’m fairly new to Parallel Extensions and there are probably better ways of handling this, but I’ll leave it for the future.

Anyway, a new Maperitive release is out, grab it from the usual place.

Published by breki on 18 Jan 2011

Maperitive: Reading OSM PBF Files

UPDATE: the post below was based on the premature assumption that my new PBF code was actually working. It turned out it had a number of serious bugs which made reading look faster than it actually was. Here’s a followup post.

For the last couple of days I’ve been working on a PBF file reader for Maperitive. A PBF file is a binary file for storing OSM geo data using Google’s protocol buffers.

It’s been a steep learning curve, since I had to learn three things at the same time: protocol buffers, using protobuf-net library for .NET and understanding the PBF format. I’m mostly satisfied with the protobuf-net library, although the lack of any new development activity worries me a little bit.

I finished most of the PBF reading stuff this evening and was eager to test the new code against the old XML reader. I used Geofabrik’s Denmark data; here are some rough results:

  • The PBF file loads 7.6 times quicker than the .osm.bz2 file. This is a really good result, mostly thanks to the way the PBF format has been designed.
  • Loading PBF data uses a quarter less memory than the XML file. I’m talking about the memory used in the process of loading, not for storing the loaded OSM data – the data is internally stored in exactly the same way for both PBF and XML reading. This result surprised me a bit; I guess the extra memory consumed by the XML reader is due to the XML parser itself and/or the fact that a lot more strings are generated when reading XML OSM tags. PBF uses string tables and thus saves a lot of space by reusing common strings.
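The string-table idea is simple enough to sketch: instead of repeating tag strings for every element, each block stores every distinct string once and elements refer to it by index (this is an illustration of the concept, not PBF’s exact wire layout):

```python
# Build a string table: each distinct string is stored once and
# tag pairs become pairs of indexes into the table.

def build_string_table(tag_pairs):
    table, indexed, seen = [], [], {}
    for key, value in tag_pairs:
        pair_idx = []
        for s in (key, value):
            if s not in seen:
                seen[s] = len(table)
                table.append(s)
            pair_idx.append(seen[s])
        indexed.append(tuple(pair_idx))
    return table, indexed

tags = [("highway", "residential"), ("highway", "primary"),
        ("name", "Main Street")]
table, indexed = build_string_table(tags)
# "highway" is stored once even though two ways use it
```

Reusing one string object per distinct value also explains the memory difference: an XML reader allocates a fresh string for every tag occurrence unless it interns them itself.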

Published by breki on 05 Jan 2011

Maperitive Build 1094

Maperitive Hiking Map Sample

The first Maperitive release of 2011 is out! You can download it from the usual place.

There are many new goodies inside, including:

  • Commands for FTP uploading and zipping files.
  • Pipelining generated files from commands like generate-tile and export-bitmap to the above commands.
  • generate-tiles command now has the ability to detect whether tile contents have changed since the last run (using tile fingerprinting). This way only the actually modified tiles can be uploaded to an FTP server, saving you a lot of time and bandwidth.
  • Scripts now have the ability to reference external files using relative paths.
  • Icons can now be placed on lines and rotated in the same way shapes can.
  • I added two new keyboard shortcuts: one for focusing on the map (Ctrl+M) and the other for focusing on the command prompt (Ctrl+Enter).
  • Various SVG export bug fixes and improvements. SVG paths are now generated in a more optimal fashion, reducing the size of the generated SVG file even further.
  • Various other bug fixes.
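The tile-fingerprinting idea from the list above can be sketched as follows. The hash choice and storage shape are my assumptions for illustration; the point is simply to compare each rendered tile against its fingerprint from the previous run:

```python
# Upload only tiles whose content changed since the last run,
# detected by comparing content hashes ("fingerprints").
import hashlib

def fingerprint(tile_bytes):
    return hashlib.sha1(tile_bytes).hexdigest()

def tiles_to_upload(tiles, previous_fingerprints):
    """Return names of tiles whose content changed since last run."""
    changed = []
    for name, data in tiles.items():
        fp = fingerprint(data)
        if previous_fingerprints.get(name) != fp:
            changed.append(name)
            previous_fingerprints[name] = fp  # remember for next run
    return changed

tiles = {"11/1085/726.png": b"new bytes", "11/1085/727.png": b"same bytes"}
previous = {"11/1085/727.png": fingerprint(b"same bytes")}
# only the first tile needs uploading
```

Since most tiles of a region don’t change between renders, this is where the bandwidth saving comes from.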

Scripting Web Maps Generation

In the last couple of weeks I’ve been working on my own hiking Web map. This is me dogfooding Maperitive, in a way. In the process I’ve fixed a number of bugs and added the above-mentioned commands for automating the process of creating and maintaining Web maps. To show what I’m talking about, here’s a sample script which generates a web map using my (soon to be published) hiking rendering rules and uploads it to an FTP server:

use-ruleset location=hiking.txt
load-source Stajerska.osm.bz2
load-source Stajerska.ibf
load-image file=Stajerska-hillshading.png background=false
set-bounds 15.14,46.39,15.92,46.
generate-tiles minzoom=11 maxzoom=15 use-fprint=true
ftp-upload user=me pwd=secret remote-dir=hikingmap/tiles

That’s it! In the current version the script must be run inside the Maperitive GUI, but I plan to add a pure headless console for these kinds of tasks.

What’s Next

I have a long list of features waiting to be implemented, and the list doesn’t seem to get any shorter with time. But the main focus will be on even better automatic scripting support and improved map rendering quality.

Published by breki on 20 Nov 2010

Maperitive: New Release

Yes, it’s finally here! After more than a month of hard work and a lot of code changes, I managed to produce a new stable release (well, I hope it’s stable). Just to be on the safe side, I did not publish the package to the main download directory, so your old Maperitive installations will not detect the new version. This means you’ll need to download it manually from the beta location. So if anyone wants to have a go, please do, and report any problems.

The new version has a lot of infrastructure code changes. The biggest change is that I’ve replaced Castle Windsor with my own newly implemented dependency injection library. This is probably not very interesting to end users, so I’ll write more about it in other posts.

As for functionality, there are a lot of improvements:

  • improved performance (I’ve done performance profiling using dotTrace)
  • fixed Illustrator SVG problems
  • XAPI URL is now configurable
  • you can now specify lflp.max-allowed-corner-angle and lflp.min-buffer-space settings which control how line labeling works (see the default rules)
  • tile generator: new min-tile-file-size parameter which allows skipping the generation of empty tiles
  • better error description of invalid OSM files
  • more forgiving OSM reader
  • Maperitive should no longer fail if it cannot write its settings
  • export commands now export to ‘output’ directory by default.


Published by breki on 19 Nov 2010

Maperitive vs. Adobe Illustrator

Maperitive -> SVG -> Adobe Illustrator

It’s been a hard fight, but I’ve finally worked out most (all?) of Adobe Illustrator’s quirks and bugs in SVG importing, and ways to work around them. I can now officially say that Adobe’s support for SVG is lousy (so much for their professed commitment to open standards). I even managed to export SVGs from Illustrator which then could not be imported back into it (“Can’t open the illustration”).

Anyway, SVGs now look pretty nice in Illustrator, but there is a price to pay: they need to be generated in a different way than for Inkscape, so there is a new setting available in the export-svg command. They certainly look better than SVGs produced by the Export tab on the OSM map site, and they are structured in a more usable way (better layering and reuse of shapes, use of actual text lettering instead of graphic paths, etc.).

Here’s a sample SVG map of Dublin’s center, so you can take a look. Warning 1: do not try to open the file in a browser – it is a compressed SVG (SVGZ) file which only Illustrator and Inkscape know how to handle. Warning 2: although the map file is not very large, it may take a while for Illustrator to open and show it.

Expect this feature to be available in the next Maperitive release (within days).

Published by breki on 20 Oct 2010

OpenStreetMap: What’s Wrong With The Picture

Justin O’Beirne wrote a couple of blog posts about his “outside” view of OpenStreetMap Web maps (the Mapnik layer, basically). When I say “outside” view, I mean that he talks about how and what data is presented on the map from the point of view of a visitor to the OSM site, not from the point of view of an active OSM mapper who knows the root causes of these problems.

And this is just the point: well-intentioned criticism from someone outside the community should be received as such. I think the OSM community is becoming more and more self-centered and disregards some basic issues about why this project exists in the first place. Why should Justin (or anybody else) care what “Mapnik” or “Osmarender” means? Why should he care about the tagging mess which resulted from the anarchic way the project is (not) being led? Not everyone wants to become a mapper – most people just want to find something on the map.

Reading through various OSM mailing lists and forums, one gets the feeling there is very little concern about whether the data being collected by hardworking individuals will be useful in a practical way. I see two main problems here:

  • Inconsistency in how things are tagged, and the project’s inability to set some strict quality guidelines for tagging. The “everybody can tag the way she likes” slogan starts to wear off once you want to use such data for something more than just displaying it on the OSM web map.
  • A high barrier to entry if you want to access the data. Sorry, but not everyone has the technical means and knowledge to import a 12 GB zipped planet.osm XML file into a database and then run queries just to access the latest data for their local area. OSMXAPI is great, but it’s unstable and limits how much data can be retrieved. Country extracts help, but the problem is that they are country-oriented. What if I do not want my data to be cut along the border? Some time ago I suggested providing grid extracts instead of country ones – the user would choose which grid cells to download and then merge the data himself.

Anyway, enough ranting… Going back to coding.

Published by breki on 25 Sep 2010

Poor Man’s Task Tracking Tool, Revisited

Back before Maperitive was first released, I wrote a post about how I use simple text files to keep track of things I have to implement (and things already implemented).

It turns out the to-do list has grown so much that it is very difficult to decide which things to implement in which order. Some features or bugs come up in the middle of implementing other features, and I’ve frequently had to use SVN branches to work things out.

So I got the idea of using a Google Docs spreadsheet to create a list of tasks. But a simple list was not enough: I wanted the spreadsheet to be able to tell me which tasks should be implemented first and which can wait. I added two columns to the list: priority and complexity. Then there’s a third column, score, which is calculated from the priority and complexity using a simple formula. The complexity is measured in “ideal hours” the task is supposed to take (a rough estimate, of course), while the priority is a value (usually an integer from 1 to 5) which denotes how important the task (or feature) is.

"to do" list using Google Docs

After entering tasks, I simply use the spreadsheet’s “Sort sheet Z → A” function to make the tasks with the highest score appear at the top of the list.
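The post doesn’t give the actual formula, but a plausible sketch is score = priority / complexity, so important-but-quick tasks float to the top when sorted descending (the task names and numbers below are made up):

```python
# Hypothetical scoring: high priority and low complexity both raise
# the score, mimicking the spreadsheet's descending sort.

def score(priority, complexity_hours):
    return priority / complexity_hours

tasks = [
    ("automatic label placement", 5, 40),  # important but huge
    ("fix SVG export bug", 4, 2),          # important and quick
    ("new keyboard shortcut", 1, 1),       # quick but minor
]
ranked = sorted(tasks, key=lambda t: score(t[1], t[2]), reverse=True)
# quick high-priority fixes end up first
```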

Simple, but effective.
