Hi, I want to preprocess the planet with the car profile, but the extraction has now been running for 3 days and is still at 100% of the Graphs step (so there is much more to go), and I am wondering whether my swap and stxxl are simply set up inappropriately. I have the following system:
HD1 has the
HD2 is mostly empty (but the space is going to be needed for some other big data).
My approach for the extract was to set up an additional 300 GB swap on HD2 and a 300 GB stxxl file on HD2, but it seems to be too slow. iotop shows some reading and some writing activity on HD2, but nothing major, and htop does not show any significant CPU usage. I start the extraction with 8 processes.
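For concreteness, a sketch of this kind of setup is below. The mount point /mnt/hd2, the file names, and the planet file name are assumptions; the 300 GB sizes and the 8 threads are the values mentioned above, and the osrm-extract option names follow OSRM 5.x (check osrm-extract --help for your version).

```
# Create and enable a 300 GB swap file on HD2 (mount point assumed).
sudo fallocate -l 300G /mnt/hd2/swapfile
sudo chmod 600 /mnt/hd2/swapfile
sudo mkswap /mnt/hd2/swapfile
sudo swapon /mnt/hd2/swapfile

# Point STXXL at a 300 GB scratch file on HD2.
# Line format: disk=<path>,<capacity in MiB>,<access method>.
# The file goes into the home directory, one of the places the STXXL
# library (used by older OSRM releases) looks for its configuration.
echo "disk=/mnt/hd2/stxxl,300000,syscall" > ~/.stxxl

# Run the extraction with the car profile and 8 threads.
osrm-extract -p profiles/car.lua -t 8 planet-latest.osm.pbf
```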
Thanks!

asked 02 Sep '17, 09:10 wordli
Processing the full planet with the default car profile will take more than 150 GB of RAM. In your case, with 64 GB of RAM and 32 GB of swap, I'd expect it to die at some point; you'd need 96 GB of swap to even make it work, and then it would take very long. One way of saving memory is splitting the planet up into disjoint areas, of course at the cost of not being able to route between them later on. Or you could use a different routing software that has a smaller memory footprint (but might then be slower to run). 64 GB of RAM should be enough to run the routing once it is computed, so another option would be to rent a large-memory Amazon machine for a day and have the extract processed there.

answered 04 Sep '17, 21:29 Frederik Ramm ♦
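To make the splitting option concrete: one way to cut the planet into disjoint pieces is osmium-tool's extract command. The bounding box, polygon file, and output names below are only placeholders, and each resulting piece then gets its own osrm-extract run.

```
# Example only: cut a rectangular region out of the planet file.
# The bounding box is min lon, min lat, max lon, max lat.
osmium extract --bbox 5.9,45.8,10.5,47.8 planet-latest.osm.pbf -o region-1.osm.pbf

# Alternatively, cut along a polygon file describing the region.
osmium extract --polygon region-2.poly planet-latest.osm.pbf -o region-2.osm.pbf

# Each piece is then preprocessed separately.
osrm-extract -p profiles/car.lua region-1.osm.pbf
```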
It is unfortunate that you cannot do the extraction on a machine that could actually run the program afterwards. But the outsourced processing is a good idea. Thanks!
(07 Sep '17, 08:13)
wordli
Note that older versions of OSRM have a smaller memory footprint: depending on the features you require, you may find that 4.9.1 fits your needs.
(07 Sep '17, 15:41)
Richard ♦
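A rough sketch of how one might fetch and build that older release follows. The v4.9.1 tag name and the plain CMake invocation are assumptions; check the repository's releases page and the 4.x build instructions for the exact dependencies.

```
# Fetch the OSRM backend and switch to the older 4.9.1 release.
git clone https://github.com/Project-OSRM/osrm-backend.git
cd osrm-backend
git checkout v4.9.1    # assumed tag name for the 4.9.1 release

# Standard CMake build (4.x needs its own dependency set, e.g. STXXL).
mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"
```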
How large is the input file (the .osm.pbf file)? Are you trying to process the whole planet?
You might have a look at https://lists.openstreetmap.org/pipermail/osrm-talk/2017-July/001460.html for RAM requirements.
Yes, I am processing the whole planet (37 GB).
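For reference, a quick way to check the input before committing to a multi-day run; the file name is a placeholder and osmium-tool is assumed to be installed.

```
# Show the on-disk size of the planet file.
ls -lh planet-latest.osm.pbf

# Print format, bounding box and other header details of the file.
osmium fileinfo planet-latest.osm.pbf
```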