I have successfully imported europe-latest.osm.pbf into PostgreSQL 10. Now I am using generate_tiles_multiprocess.py (NUM_THREADS=10) to pre-render tiles. I have already pre-rendered Europe at zoom 0-7 and that went fine (but those are not that many tiles). Now I am trying to pre-render individual countries at zoom 8-17 (the Netherlands, for example), but even such a small country takes very long: it has been running for 6 hours and has generated 6000 tiles, which comes down to roughly 3 seconds per tile (30 s per tile per core). At this rate the Netherlands will take weeks to finish.

The specs of my machine are:

    i9-7900X CPU @ 3.30 GHz (10 cores)
    64 GB RAM
    2 TB SSD

I have previously rendered a single province of the Netherlands on my 8-year-old laptop (with that province being the only data in my PostgreSQL db, NUM_THREADS=4) and that went fine (effectively 1 tile per 0.25 s, i.e. about 1 s per tile per core). This leads me to believe that PostgreSQL is my bottleneck, as my current machine is much faster but has a lot more data to query. I found a lot of information about how to tune PostgreSQL when importing OSM data, but little to nothing about the best settings for rendering. Are there any settings that make a world of difference? I have the following PostgreSQL settings:
What other settings are relevant? Which of these should I alter? I also found some information about tuning the stylesheet queries, but I am using the commonly used openstreetmap-carto style and was hoping that a greater mind than mine had already optimized it. Or are there any specific database indexes I should create for this particular stylesheet that would speed things up?

When I look at top I see 16 postgres processes (not 10) that each use between 45% and 65% CPU and between 0.1% and 0.3% memory. It looks like it is not using much memory. free -m:
                  total        used        free      shared  buff/cache   available
    Mem:          49739        6821        1836         528       41082       41791
    Swap:             0           0           0

Does anyone know what my bottleneck is? Last, and I hope it is not too relevant: I am running Docker containers in VirtualBox (I gave it 50 GB RAM).

p.s. I read https://help.openstreetmap.org/questions/65115/solved-slow-pre-rendering-with-generate_tiles_multiprocesspy but I already imported without the -l option.

p.p.s. Here someone also pre-renders Europe: https://help.openstreetmap.org/questions/54560/slow-generation-of-tiles-for-whole-europe and he claims to generate half a million tiles overnight. That is 100 times faster than what I am seeing. This leads me to believe that with better tuning I can gain a lot.

asked 13 Sep '19, 16:55 mg218
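As a rough sanity check on the "weeks to finish" estimate above, the tile count for zooms 8-17 over a Netherlands-sized bounding box can be worked out with the standard slippy-map tile formula. The sketch below is purely illustrative: the bounding box coordinates are approximate values chosen for this example, and the 3 s/tile throughput is the figure reported in the question.

    import math

    def deg2num(lat, lon, zoom):
        """Standard slippy-map tile numbering (x grows east, y grows south)."""
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
        return x, y

    # Approximate bounding box of the Netherlands (illustrative values).
    MIN_LAT, MAX_LAT = 50.7, 53.6
    MIN_LON, MAX_LON = 3.3, 7.3

    total = 0
    for z in range(8, 18):                       # zooms 8..17 inclusive
        x_min, y_max = deg2num(MIN_LAT, MIN_LON, z)
        x_max, y_min = deg2num(MAX_LAT, MAX_LON, z)
        total += (x_max - x_min + 1) * (y_max - y_min + 1)

    print(f"tiles for z8-17: {total}")
    print(f"days at 3 s/tile: {total * 3 / 86400:.0f}")

This comes out at a few million tiles, so at the observed 3 s/tile the full run is indeed a matter of weeks to months: the sheer number of tiles is as much of a factor as the per-tile speed.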
3 seconds a tile for zooms 8-12 sounds like roughly what I'd expect. These involve huge numbers of not very selective SQL queries and therefore take a long time to run. Tiles at higher zooms should be a little faster, as the queries will be more selective, but they produce more data. For these reasons most sites only pre-render low-zoom tiles periodically, and the higher-zoom tiles (metatiles) are only rendered on demand. The latter is because: a) rendering them all takes a lot of time; b) it is a lot of tiles; and c) most of them will never be looked at.

If you log SQL queries on your postgres instance you may be able to see exactly how many queries are run per tile (or do a single-tile render as per switch2osm). You can also grab the SQL and check whether the actual query plan is what you'd expect, but bear in mind that individual queries are retrieving the highway network, railway network, water features, natural & landuse, airports, place names, boundaries, some POIs, etc. For low-zoom metatiles covering much of NL this may mean table scans are used rather than indexes.

answered 13 Sep '19, 17:10 SK53 ♦

Effectively 1 tile is generated every 3 seconds, but it also means that 10 cores are busy and each takes 30 s to generate 1 tile. Then it doesn't sound so fast any more. Maybe that is a better way to look at it?
(13 Sep '19, 17:28)
mg218
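To follow the suggestion in the answer concretely, statement logging can be switched on temporarily and one of the logged queries fed to EXPLAIN ANALYZE. The snippet below is a generic sketch using the default osm2pgsql table and column names; the sample query is only a placeholder shaped like a typical rendering query, and the bounding box coordinates are arbitrary Web Mercator values, not anything taken from this thread.

    -- Log every statement with its duration, then reload the config.
    ALTER SYSTEM SET log_min_duration_statement = 0;
    SELECT pg_reload_conf();

    -- Take one of the slow queries from the log and look at its plan.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT way, highway
    FROM planet_osm_line
    WHERE highway IS NOT NULL
      AND way && ST_MakeEnvelope(370000, 6800000, 380000, 6810000, 3857);

    -- Switch logging off again afterwards; logging every statement is itself costly.
    ALTER SYSTEM SET log_min_duration_statement = -1;
    SELECT pg_reload_conf();

Grepping the log for the time window of a single metatile render then gives the query count per metatile, and the EXPLAIN output shows whether the GiST index on the geometry column is actually used or whether, as suggested above for large low-zoom areas, the planner falls back to sequential scans.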
I can't do your PostgreSQL DB optimisation for you; you need to do some standard DBA work. Get some query plans, see how much memory is allocated to each process (the default is 4 MB, IIRC), and look at other things like the checkpoint interval, etc.
(15 Sep '19, 15:31)
SK53 ♦
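The comment names the knobs but not the values. Purely as an illustration, a read-mostly rendering database on a 64 GB machine is often started from settings along these lines; the numbers are assumptions to be checked against the real workload, not recommendations made anywhere in this thread.

    # postgresql.conf -- illustrative starting points for a render-only database
    shared_buffers = 8GB               # PostgreSQL's own page cache
    effective_cache_size = 32GB        # hint to the planner about available OS cache
    work_mem = 64MB                    # per sort/hash per backend; keep modest with many workers
    maintenance_work_mem = 2GB         # only used for index builds and VACUUM
    random_page_cost = 1.1             # SSD: random reads nearly as cheap as sequential
    effective_io_concurrency = 200     # SSD
    checkpoint_timeout = 30min         # fewer, larger checkpoints
    max_wal_size = 4GB
    checkpoint_completion_target = 0.9

Because rendering traffic is almost entirely reads, the checkpoint settings usually matter much less here than shared_buffers, effective_cache_size and whether the right indexes are being hit; the query plans are what show which of these is the actual limit.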
I changed work_mem from 4 MB to 64 MB (and did not see an improvement). I was hoping that, since many people tile Europe with the same openstreetmap-carto stylesheet, with data imported with osm2pgsql, on a CPU with 10+ cores (all more or less standard), good settings for PostgreSQL would be well known and documented. But I find tons of information about good settings for importing and very little about settings for tiling. And yes, I have very little know-how about database tuning, so I fear I wouldn't know what I was looking at. Until now I had never heard of checkpoint intervals, to give you an idea of how little I know about this aspect of IT.
(16 Sep '19, 21:58)
mg218
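One possible reason a work_mem change shows no effect is that it was edited in postgresql.conf without a reload, or set only in an interactive session rather than for the connections mapnik opens. A generic way to set and verify it (standard PostgreSQL commands, not something prescribed in this thread) is:

    -- in psql, connected to the rendering database
    SHOW work_mem;                        -- value new connections currently get
    ALTER SYSTEM SET work_mem = '64MB';   -- written to postgresql.auto.conf
    SELECT pg_reload_conf();              -- apply without restarting the server
    SHOW work_mem;                        -- should now report 64MB

On the index question from the original post: if memory serves, openstreetmap-carto ships a set of optional custom indexes (an indexes.sql file in the style repository) tailored to its own queries, so applying those is probably a better first step than inventing new indexes by hand.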