Hi all, unfortunately the import crashed. What I don't understand is that I used the --slim --drop options and, after the osm2pgsql crash, only 30% of the disk space is used (259GB). So I guess the unnecessary tables had already been dropped when osm2pgsql ran out of disk space. I don't think osm2pgsql needs more than 700GB, even temporarily, to build the indexes. I would like to relaunch the import; do you have any advice on how to use less disk space?
In fact I have 916GB available, minus the planet.osm.pbf file (50GB), so ~865GB. If it's confirmed that this is not sufficient for a full planet import, I'll add the information to this post (Tile server hardware requirements). Below is more information, thanks in advance for your help! Augustin

osm2pgsql command
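(The exact command used isn't shown here; as a rough sketch, a planet import with the options mentioned in this thread, --slim and --drop, could look like the following. The cache size, database name and file path are assumptions, not the values actually used.)

    # illustrative only: planet import in slim mode, dropping the slim tables afterwards
    osm2pgsql --create --slim --drop --hstore \
              -C 24000 \
              -d gis \
              /data/planet.osm.pbf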
System: Debian 9, 4 cores i5-6500 CPU @ 3.20GHz, 32GB RAM, 1TB + 500GB SSD
postgresql.conf extract (PostgreSQL version: 10)
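(The actual extract isn't reproduced; as a hedged example, settings commonly tuned for a one-off osm2pgsql import on a 32 GB machine look roughly like this. The values are illustrative assumptions, not the ones from this server.)

    # postgresql.conf sketch for a one-off import (illustrative values)
    shared_buffers = 4GB
    work_mem = 256MB
    maintenance_work_mem = 8GB      # helps index creation in the final stage
    autovacuum = off                # only during the initial import
    checkpoint_timeout = 30min
    max_wal_size = 10GB
    fsync = off                     # acceptable only for a throwaway initial load
    full_page_writes = off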
osm2pgsql logs extract vs. disk space usage

On January 9th at 6pm, osm2pgsql was still processing relations and 78% of the disk space was used:

On January 10th at 8pm, osm2pgsql crashed because of a lack of disk space (logs below), whereas only 30% of the disk space was used.

Here are the osm2pgsql logs with the crash:
asked 13 Jan '20, 12:14 augustind
865 GB is imho not enough anymore, but I'm not using --drop. I would strongly advise keeping the planet.osm.pbf somewhere else, as well as using --flat-nodes (the file is about 50 GB in size) and keeping the flat-nodes file somewhere else (an extra drive). I recently did an import with 1.1 TB of free space (used for the PostgreSQL db + flat-nodes file; the planet.osm.pbf was on another drive) and that worked (and probably won't work anymore in 2021). You could also look into the filesystem you are using: ZFS offers compression and would help.

answered 13 Jan '20, 12:24 Spiekerooger

Thanks a lot Spiekerooger for your quick answer. I'll try to relaunch the import following your advice:
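(As a sketch only, a relaunch along those lines could look like the following; the mount points, cache size and database name are assumptions, with the planet file and the flat-nodes file kept on a second drive as suggested above.)

    # illustrative relaunch: planet file and flat-nodes file on a separate drive
    osm2pgsql --create --slim --drop --hstore \
              --flat-nodes /mnt/second-drive/flatnodes.bin \
              -C 24000 \
              -d gis \
              /mnt/second-drive/planet.osm.pbf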
I'll come back here to report what happened!
(13 Jan '20, 12:53)
augustind
I just had a look at my statistics: with a ZFS filesystem and ZFS compression turned on, I maxed out at 692 GB incl. the flat-nodes file during import (index building, I think) on a recent import (without the --drop option and with autovacuum on). Right now the db plus flat-nodes file takes up about 530 GB, after the import and after some --append updates. With ext4 and about ~900 GB of free space for the db + flat-nodes file, I ran into the same problems you have on a first import (but again, I'm not using the --drop option).
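(For reference, assuming the pool is simply named data, the achieved compression ratio and the logical vs. physical space can be checked like this; the pool name is an assumption.)

    # how much ZFS compression is saving on the dataset
    zfs get compressratio,used,logicalused data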
(13 Jan '20, 14:24)
Spiekerooger
Thanks for these benchmarks. So let's go for ZFS. Following the recommendations in Paul's Blog - ZFS Settings for Osm2pgsql, I would like to convert my 1TB disk (currently ext4, mounted on /data) to ZFS with lz4 compression enabled and an 8K recordsize. As I'm not very familiar with filesystems and don't know ZFS, could somebody confirm that the approach below is approximately correct (source: https://wiki.debian.org/ZFS)?
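(For illustration, a conversion along those lines could look like the sketch below, with /dev/sda taken from the follow-up comment; this is an assumption-laden outline, not necessarily the exact steps from the Debian wiki, and it destroys all data on the disk.)

    # WARNING: this wipes the disk. Back up /data first and double-check the device name.
    umount /data                     # stop using the old ext4 filesystem
    zpool create data /dev/sda       # pool named "data", mounted under /data by default
    zfs create data/db               # child dataset for the database (name is hypothetical)
    zfs set compression=lz4 data     # enable lz4 compression (inherited by child datasets)
    zfs set recordsize=8k data       # match PostgreSQL's 8K page size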
(14 Jan '20, 11:37)
augustind
If /dev/sda (are you sure about that, especially the sda?) is the extra 1TB drive where you plan to keep the db, this sounds about OK. You may have to format the drive before using zpool. Personally I wouldn't even create an extra filesystem under the ZFS pool but would just create the pool directly as "data". So I would just:
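(A sketch of what that one-step variant could look like; the property values echo the lz4/8K settings discussed above and are assumptions, not a quoted command.)

    # create the pool directly as "data", setting compression and recordsize at creation time;
    # a pool named "data" is mounted under /data by default
    zpool create -O compression=lz4 -O recordsize=8k data /dev/sda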
(without the extra zfs create in between). That way, the pool should be mounted under /data directly. But I don't know what else you are planning for that drive, so you may go ahead with your plan.
(14 Jan '20, 12:34)
Spiekerooger
Finally it worked as below for the ZFS compression. For the whole conclusion, see the answer at the bottom of the ticket. Thanks. Be careful:
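(A sketch of settings along those lines, assuming the pool is named data; as the next comment points out, the property to tune is the dataset's recordsize, not the pool's ashift.)

    # set on the existing dataset; recordsize only affects newly written blocks,
    # so set it before loading the database
    zfs set compression=lz4 data
    zfs set recordsize=8k data
    zfs get compression,recordsize data      # verify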
(22 Jan '20, 14:03)
augustind
Oops, yes, actually I'm using set recordsize as well, and not ashift. Glad you got it running.
(22 Jan '20, 14:07)
Spiekerooger
Run your import with … Why? You ran out of disk space in the final stage of osm2pgsql, where tables are sorted and indexes are created. In order to sort a table, PostgreSQL needs to make a complete copy of it, so you temporarily need twice the size of the table on disk. To speed things up, osm2pgsql runs parallel threads that do the sorting and indexing. In the worst case it therefore sorts all tables in parallel, so you need twice the disk space you would actually need in the end. Regarding the …

answered 13 Jan '20, 19:09 lonvia
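To illustrate the parallel-sorting point in the answer above: one way to trade speed for a lower peak of temporary disk usage is to reduce osm2pgsql's parallelism, e.g. with --number-processes (shown purely as an illustration, not necessarily the option the answer refers to; paths, cache size and database name are assumptions).

    # fewer parallel workers => fewer tables sorted/indexed at the same time,
    # so a lower temporary disk-space peak (at the cost of a slower final stage)
    osm2pgsql --create --slim --drop --hstore \
              --number-processes 1 \
              -C 24000 -d gis /data/planet.osm.pbf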
Hi, Thanks everybody for your help, the import was successful :) What has been changed for this second import:
Benchmark details here: Wiki OSM / Osm2pgsql Benchmarks / Desktop Debian 9, 4 cores i5-6500 CPU @ 3.20GHz / 32GB RAM, 1TB+500GB SSD (hstore, slim, drop, flat-nodes and ZFS filesystem). About the PostgreSQL/PostGIS OSM database (on the ZFS filesystem): peak usage 460 GB, end size 185 GB. The source code of the related project has just been released on GitHub: osmtilemaker (generate your OSM tiles: one shot, no updates).

answered 22 Jan '20, 13:58 augustind
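(For anyone reproducing this, the reported sizes can be checked roughly as follows; the database name gis and the pool name data are assumptions.)

    # on-disk size of the rendering database as PostgreSQL sees it
    psql -d gis -c "SELECT pg_size_pretty(pg_database_size('gis'));"
    # physical space used on the ZFS pool (includes the flat-nodes file if stored there)
    zfs list data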
After @Spiekerooger's answer, it would be nice if somebody else could provide some explanation of how the
answered 13 Jan '20, 12:56 augustind