We have a problem importing north-america-latest.osm.pbf into our map server VM, which has 6 vCPUs, 16 GB RAM, and 550 GB of storage.
Importing the Texas extract (.pbf) works fine. We then tried the following two commands for the North America extract, and in both cases the import failed with a core dump.
osm2pgsql --slim -d gis -C 4000 /mapdata/north-america-latest.osm.pbf
osm2pgsql --slim -d gis -C 1200 --number-processes 3 /tmp/north-america-latest.osm.pbf
HINT: In a moment you should be able to reconnect to the database and repeat your command.
terminate called after throwing an instance of 'std::runtime_error'
what(): CREATE INDEX planet_osm_ways_nodes ON planet_osm_ways USING gin (nodes) WITH (FASTUPDATE=OFF) ;
failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Aborted (core dumped)
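Since "server closed the connection unexpectedly" means the PostgreSQL backend itself went away during the CREATE INDEX, we looked for the backend's exit reason in the server log (the path below is an assumption for our CentOS layout; adjust it for your data directory):

# PostgreSQL server log; the pg_log path is an assumption, adjust for your install
tail -n 100 /var/lib/pgsql/data/pg_log/postgresql-*.log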
We then increased the "-C | --cache num" value to 12000 and then to 14000. In both cases the import failed at the same step, as shown below:
bash-4.1$ osm2pgsql -d gis --slim -C 12000 /mapdata/north-america-latest.osm.pbf
osm2pgsql SVN version 0.88.2-dev (64 bit id space)
Using built-in tag processing pipeline
Using projection SRS 900913 (Spherical Mercator)
Setting up table: planet_osm_point
Setting up table: planet_osm_line
Setting up table: planet_osm_polygon
Setting up table: planet_osm_roads
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=12000MB, maxblocks=1536000*8192, allocation method=11
Mid: pgsql, scale=100 cache=12000
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Reading in file: /mapdata/north-america-latest.osm.pbf
Processing: Node(870095k 259.3k/s) Way(61241k 14.44k/s) Relation(509320 68.81/s)
Standard exception processing relation id=6244046: TopologyException: side location conflict at -8427146.1899999995 5064220.8799999999
Processing: Node(870095k 259.3k/s) Way(61241k 14.44k/s) Relation(566160 71.63/s) parse time: 15501s
Node stats: total(870095450), max(4515939386) in 3356s
Way stats: total(61241977), max(455153763) in 4240s
Relation stats: total(566169), max(6729780) in 7905s
Committing transaction for planet_osm_point
Committing transaction for planet_osm_line
Committing transaction for planet_osm_polygon
Committing transaction for planet_osm_roads
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Using built-in tag processing pipeline
Going over pending ways...
33693166 ways are pending
Using 1 helper-processes
Finished processing 33693166 ways in 20391 sec
33693166 Pending ways took 20391s at a rate of 1652.35/s
Committing transaction for planet_osm_point
Committing transaction for planet_osm_line
Committing transaction for planet_osm_polygon
Committing transaction for planet_osm_roads
Going over pending relations...
0 relations are pending
Using 1 helper-processes
Finished processing 0 relations in 0 sec
Committing transaction for planet_osm_point
WARNING: there is no transaction in progress
Committing transaction for planet_osm_line
WARNING: there is no transaction in progress
Committing transaction for planet_osm_polygon
WARNING: there is no transaction in progress
Committing transaction for planet_osm_roads
WARNING: there is no transaction in progress
Stopping table: planet_osm_nodes
Stopping table: planet_osm_ways
Building index on table: planet_osm_ways
Stopped table: planet_osm_nodes in 0s
Stopping table: planet_osm_rels
Building index on table: planet_osm_rels
Sorting data and creating indexes for planet_osm_polygon
Sorting data and creating indexes for planet_osm_point
Sorting data and creating indexes for planet_osm_line
Sorting data and creating indexes for planet_osm_roads
Stopped table: planet_osm_rels in 28s
Killed
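The bare "Killed" with no error message from osm2pgsql is what bash prints when a process receives SIGKILL, which on Linux usually means the kernel OOM killer ended it rather than osm2pgsql crashing on its own. A quick check of the kernel log (output format varies by kernel version):

# Look for the OOM killer selecting osm2pgsql or a postgres backend
dmesg | grep -i -E 'out of memory|oom-killer|killed process'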
We checked postgresql.conf; the relevant settings are:
shared_buffers = 256MB
work_mem = 512MB
maintenance_work_mem = 16GB
checkpoint_segments = 100
checkpoint_timeout = 10min
checkpoint_completion_target = 0.9
effective_cache_size = 24GB
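Adding up what the node cache and these settings can claim at the same time gives a rough upper bound (assuming the -C 12000 run and that the CREATE INDEX uses its full maintenance_work_mem):

  12000 MB  osm2pgsql node cache (-C 12000)
+   256 MB  shared_buffers
+ 16384 MB  maintenance_work_mem (one index build may use all of it)
≈ 28.6 GB   on a 16 GB VM

so we suspect both failures come down to the machine running out of memory.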
Any suggestions on how to fix this issue?
asked 16 Oct '18, 23:30 by xming0819