NOTICE: help.openstreetmap.org is no longer in use from 1st March 2024. Please use the OpenStreetMap Community Forum

I'm running a custom OpenStreetMap instance on my server, with both the standard openstreetmap-carto style and a custom style. The current version up and running in production uses a planet file from 2019-09-12.

I now want to update the data from the latest planet file on my dev server.

I have a script that will:

  • download the latest file
  • tune the database for the import
  • drop and recreate the database
  • import the PBF
  • create the indexes
  • tune the database for production use

The server is a quad-core i7-6700 with 64 GB of RAM, a 1 TB SSD (storing the database) and 8 TB of hard drives (storing the tiles).

Here's the script code:

# Define some values:
PBF=~/data/planet-latest.osm.pbf
THREADS=8
CORES=4

# Download latest PBF:
wget https://planet.openstreetmap.org/pbf/planet-latest.osm.pbf -O $PBF
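
# Optional sanity check (not part of the original script): verify the download
# against the published MD5 checksum before spending a day on the import.
wget https://planet.openstreetmap.org/pbf/planet-latest.osm.pbf.md5 -O "$PBF.md5"
(cd "$(dirname "$PBF")" && md5sum -c "$(basename "$PBF").md5")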

# Drop the database:
sudo -u postgres dropdb osm

# Recreate the database:
sudo mkdir -p /ssd/db/osm
sudo chown postgres /ssd/db/osm
sudo -u postgres psql -c "CREATE TABLESPACE osm_tablespace LOCATION '/ssd/db/osm';"
sudo -u postgres createdb -E UTF8 -O myusername -D osm_tablespace osm

# Setup PostGIS:
sudo -u postgres psql osm -c "CREATE EXTENSION postgis;"
sudo -u postgres psql osm -c "CREATE EXTENSION hstore;"
sudo -u postgres psql osm -c "ALTER TABLE geography_columns OWNER TO myusername;"
sudo -u postgres psql osm -c "ALTER TABLE geometry_columns OWNER TO myusername;"
sudo -u postgres psql osm -c "ALTER TABLE raster_columns OWNER TO myusername;"
sudo -u postgres psql osm -c "ALTER TABLE raster_overviews OWNER TO myusername;"
sudo -u postgres psql osm -c "ALTER TABLE spatial_ref_sys OWNER TO myusername;"

# Fine tune for import
if [ ! -f /etc/postgresql/10/main/postgresql.conf.bak ]; then
    sudo cp /etc/postgresql/10/main/postgresql.conf /etc/postgresql/10/main/postgresql.conf.bak
fi

sudo rm /etc/postgresql/10/main/postgresql.conf
sudo cp /etc/postgresql/10/main/postgresql.conf.bak /etc/postgresql/10/main/postgresql.conf

sudo tee -a /etc/postgresql/10/main/postgresql.conf << EOF
synchronous_commit = off        # /!\ Use these for OSM import only
fsync = off                     # /!\ Use these for OSM import only
random_page_cost = 1.1
wal_buffers = 16MB
effective_cache_size = 48GB
maintenance_work_mem = 8GB
shared_buffers = 16GB
work_mem = 4GB
autovacuum = on
effective_io_concurrency = 200
max_connections = 300
checkpoint_completion_target = 0.7
default_statistics_target = 100
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = $THREADS
max_parallel_workers_per_gather = $CORES
max_parallel_workers = $THREADS
EOF

sudo service postgresql restart
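
# Optional check (not in the original script): verify that the import-time
# settings were picked up after the restart.
sudo -u postgres psql -c "SHOW fsync;"
sudo -u postgres psql -c "SHOW synchronous_commit;"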

# Import the PBF:
mkdir -p /ssd/flat-nodes

osm2pgsql \
    -d osm \
    --create --slim --multi-geometry --hstore --drop \
    --cache 20000 \
    --number-processes 1 \
    --tag-transform-script ~/src/map-styles/openstreetmap-carto/openstreetmap-carto.lua \
    --style ~/src/map-styles/openstreetmap-carto/openstreetmap-carto.style \
    --flat-nodes /ssd/flat-nodes/osm.cache \
    "$PBF"

# Create the indexes:
cd ~/src/map-styles/openstreetmap-carto
psql -d osm -f indexes.sql
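
# Optional check (not in the original script): list the indexes on the rendering
# tables, so a silently failed index creation is caught right away.
psql -d osm -c "\di+ planet_osm_*"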

# Fine tune for production:
sudo rm /etc/postgresql/10/main/postgresql.conf
sudo cp /etc/postgresql/10/main/postgresql.conf.bak /etc/postgresql/10/main/postgresql.conf

sudo tee -a /etc/postgresql/10/main/postgresql.conf << EOF
#synchronous_commit = off        # /!\ Use these for OSM import only
#fsync = off                     # /!\ Use these for OSM import only
random_page_cost = 1.1
wal_buffers = 16MB
effective_cache_size = 48GB
maintenance_work_mem = 8GB
shared_buffers = 16GB
work_mem = 1GB
autovacuum = on
effective_io_concurrency = 200
max_connections = 300
checkpoint_completion_target = 0.7
default_statistics_target = 100
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = $THREADS
max_parallel_workers_per_gather = $CORES
max_parallel_workers = $THREADS
EOF

I have already successfully used that script in the past. But if I run it now, my map becomes super slow (with both the openstreetmap-carto style and mine), as if it were missing all the indexes. Running the script on the 2019-09-12 PBF results in a perfectly usable map.

What could be wrong? Has the OSM database schema changed since last year? Do I need to delete the flat-nodes file before importing again?

Thanks in advance

asked 25 Aug '20, 12:42 by timautin

Can you clarify what exactly is super slow? The script runs and completes within the expected time, and then the map tiles are slow? (Yes, you should have deleted the node cache file, but it would only slow down the import, not the tile serving.) Are you using renderd or tirex or ...? Did you clean old tiles from your tile cache or are they still there? Is there a planet-import-complete file in your tile directory?

(25 Aug '20, 12:48) Frederik Ramm ♦

Thanks for your comment. By slow I mean that generating my test tile takes 1-2 minutes when it should take 6-8 seconds. The import ran fine (in 24 h). I'm using renderd and Mapnik, and yes, I deleted the old tiles. I have no planet-import-complete file in my mod_tile directory, what is that file? I don't think I ever had one, is this new? EDIT: creating the file and restarting Apache + renderd does not change anything.
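(Concretely, creating the file meant something like this, with the default mod_tile tile directory assumed:)

sudo touch /var/lib/mod_tile/planet-import-complete
sudo service apache2 restart
sudo service renderd restart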

(25 Aug '20, 12:54) timautin

The planet-import-complete file is not required, I was just asking to better understand your setup.

(25 Aug '20, 13:06) Frederik Ramm ♦

As Jochen Topf suggested in the comments, I simply ran out of disk space during the import. Making some room fixed the problem.

It's a 960 GB SSD, on which I only had 128 GB of data. So 832 GB of free disk space is no longer enough to create a planet OSM DB. Looks like we'll pretty soon need 2 TB SSDs.
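
For anyone hitting the same thing: keeping an eye on free space in a separate terminal while osm2pgsql runs would have caught this much earlier, e.g. (path from my setup):

watch -n 300 df -h /ssd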

answered 29 Aug '20, 18:42 by timautin

What you are seeing is the normal effect of having deleted your tile cache. Tiles on z0-8 can easily take minutes - tiles in very dense areas might take longer than 10 minutes even with fast disks (remember, it always computes 8x8 tiles at once). The fact that it was faster before was not due to your indexes, but due to the cache which you have deleted.

Try rendering a tile on zoom 18, that should be relatively quick. Then use the appropriate render_list commands to pre-render your area of interest for zoom 0-12 (if the area is not the whole planet, perhaps look at https://github.com/alx77/render_list_geo.pl).
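
For example, something along these lines (a sketch; map name, socket path and thread count depend on your renderd configuration):

render_list --all --min-zoom=0 --max-zoom=12 --num-threads=4 --map=default --socket=/var/run/renderd/renderd.sock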

answered 25 Aug '20, 13:05 by Frederik Ramm ♦

Sorry, I should have added more details: my test tile is a zoom 13 tile that I use to check that everything works properly after each update. Usually my process is: delete the tile cache, recreate the DB (+ indexes), check that my test tile (13/4231/3002) is still generated in 6-8 seconds, and then re-generate the cache. EDIT: an EXPLAIN ANALYZE shows that the indexes are correctly used. EDIT 2: I have no /ssd/flat-nodes/osm.cache file, is that normal? I don't have one either on the (working) production server.
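(Roughly what I checked, for reference; ST_TileEnvelope needs PostGIS 3.0+, on older versions an ST_MakeEnvelope box in EPSG:3857 does the same job:)

EXPLAIN ANALYZE
SELECT count(*)
FROM planet_osm_polygon
WHERE way && ST_TileEnvelope(13, 4231, 3002);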

(25 Aug '20, 13:08) timautin

Ok, I'm missing the planet_osm_polygon_way_idx index, that's very likely the problem (I'm creating it, it will take a while). Isn't it supposed to be created by osm2pgsql?
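(For reference, the statement I'm running is roughly the following; the exact index name can differ between osm2pgsql versions:)

CREATE INDEX planet_osm_polygon_way_idx ON planet_osm_polygon USING gist (way);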

(25 Aug '20, 14:46) timautin

Oh yes, not having that index will certainly slow down rendering a lot! And yes, it should be generated by osm2pgsql. Make sure you have the geometry indexes on the other tables as well.

(25 Aug '20, 15:37) Frederik Ramm ♦

Which version of osm2pgsql are you using?

(25 Aug '20, 15:50) Jochen Topf

Older versions of osm2pgsql have a bug where no error is shown if index creation fails. So maybe you ran out of disk space and the index wasn't created, but no error was shown. The newest version of osm2pgsql doesn't have this bug any more.

(25 Aug '20, 16:48) Jochen Topf

Thanks guys. Back after a few more investigations: creating the index myself does not solve the problem, and the created index is 25 GB vs 37 GB on the prod server. I'm thus restarting the whole import process, this time with the osm2pgsql output saved in a file (by the way, doesn't osm2pgsql have a way to output its log to a file?), because on the previous import I didn't notice anything wrong and didn't keep the output. The drive had 755 GB available, which I thought was enough; isn't it? I'll make some room before restarting the import. I'm using osm2pgsql v0.96.0.
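(For reference, this time I'm just piping the output through tee, since I'm not aware of a built-in log-file option; the log path is arbitrary:)

osm2pgsql \
    -d osm \
    --create --slim --multi-geometry --hstore --drop \
    --cache 20000 \
    --number-processes 1 \
    --tag-transform-script ~/src/map-styles/openstreetmap-carto/openstreetmap-carto.lua \
    --style ~/src/map-styles/openstreetmap-carto/openstreetmap-carto.style \
    --flat-nodes /ssd/flat-nodes/osm.cache \
    "$PBF" 2>&1 | tee ~/osm2pgsql-import.log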

(25 Aug '20, 18:20) timautin