I am trying to use Nominatim's setup tool to populate an OSM database. On Postgres 9.0/PostGIS 1.5 this works fine. I am now attempting to perform this load on Postgres 9.2/PostGIS 2.0. I needed to modify some of the plpgsql functions for compatibility regarding hstore. Once this was done, the load starts fine, but after some time Postgres runs out of memory. I tried reducing the memory usage settings in Postgres, and also tried running Nominatim with limited settings, such as disabling token precalc and limiting the number of threads to one, but Postgres still eventually runs out of memory and the load fails. Has anyone had success loading OSM into Postgres 9.2, or is this not recommended?
Postgres has many settings for memory usage that you may want to look at, and it respects them pretty well, including shared_buffers and effective_cache_size. On a dedicated server these should be set to use most of your memory, as they are a shared resource. One setting you might actually want to lower is work_mem, if you have many concurrent connections that use the maximum: each connection can individually use up to work_mem bytes of memory per sort or hash operation, in addition to the shared memory.

But before changing values with a wild guess, you should try getting more information about where the memory is going. How many postgres processes are running? How many use "too much" memory? What queries are they running? The first step to gather this info is using {a,h,}top, the pg_stat_activity view, and the postgres logs. Configure the logs to be more verbose if you haven't already. If you still cannot find the culprit, you may be better served by the postgres mailing lists than by help.osm.org.

answered 01 Oct '12, 19:38 Vincent de P... ♦
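As a concrete starting point for the diagnostics suggested above, something like the following query shows what each backend is currently doing (a sketch using the pg_stat_activity column names introduced in Postgres 9.2; on 9.1 and earlier the columns are procpid and current_query instead):

```sql
-- List each backend, its state, how long the current query has run,
-- and the query text, longest-running first.
SELECT pid, state, now() - query_start AS running_for, query
FROM pg_stat_activity
ORDER BY query_start;
```

For more verbose logging, settings such as log_min_duration_statement = 0 (log every statement with its duration) and log_temp_files = 0 (log all temporary file usage) in postgresql.conf are common choices, though logging every statement during a full OSM import will produce very large logs.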
I discovered there were too many CachedPlans being created by Postgres. These accumulate until Postgres eventually runs out of memory, shuts the connections down, and the import fails. I posted on the Postgresql forums, but the only response was that Postgres 9.2 uses more aggressive caching to help improve performance, so I am guessing it is related to that. For now I have reverted to Postgres 9.0. answered 04 Oct '12, 18:17 bryanck 1
I had one more report of somebody running into memory problems with Postgres (although version 9.1) during the load stage. We never could quite figure out why it failed on that specific installation. Eventually, the patch contained in this pull request managed to work around the problem somehow. It is within the realm of possibility that your Postgres 9.2 installation suffers from the same problem.
(05 Oct '12, 08:46) lonvia
Some clarifications would be helpful: How much RAM do you have? Do you try to load the planet or an extract? Is it indeed postgres that runs out of memory or is it osm2pgsql?
I have 24 GB of RAM, 12 cores (24 with HT), and CentOS 6.3 64-bit with all updates. I'm loading a United States extract, which loads perfectly fine with Postgres 9.0. It is Postgres that runs out of memory, not Nominatim or osm2pgsql: many CachedPlans are accumulating and not being cleaned up. I have tried lowering the Postgres memory settings significantly (shared_buffers, effective_cache_size, etc. to no more than 1 GB), but it didn't fix the problem, and my higher memory settings work fine on Postgres 9.0. I was told Postgres 9.2 has more aggressive caching, which could be at the root of this.
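To illustrate the per-connection work_mem point made in the first answer, here is a rough back-of-the-envelope sketch (not an official Postgres formula; the numbers are hypothetical) of the worst-case memory a server could use if every connection hit its work_mem limit at once:

```python
# Pessimistic memory estimate: shared memory is allocated once, while each
# connection may use up to work_mem per sort/hash node in its query plan.
# This is a hand-rolled approximation, not a formula from the Postgres docs.

def worst_case_memory_mb(shared_buffers_mb, work_mem_mb, max_connections,
                         nodes_per_query=1):
    """Assume every connection runs a query using work_mem in
    `nodes_per_query` plan nodes simultaneously."""
    return shared_buffers_mb + max_connections * work_mem_mb * nodes_per_query

# Hypothetical example: 4 GB shared_buffers, 64 MB work_mem, 100 connections.
print(worst_case_memory_mb(4096, 64, 100))  # 4096 + 100*64 = 10496 MB
```

This is one reason lowering work_mem helps more than lowering shared_buffers when many backends are active: the per-connection term scales with the connection count, while shared memory is a fixed one-time allocation.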