Hello, I've been trying to improve the speed of reverse geocoding queries, and in the process I noticed that many tables are not used at all. Hence, two questions: 1) Which tables can be safely removed or truncated to minimize disk usage? Ideally it would still be possible to keep the DB up to date (even if I need to truncate the aforementioned tables from time to time). 2) Is there any way to speed up Nominatim queries? From what I can see so far:
asked 03 Jun '16, 14:41 Taras O
Hello, if you want to disable logging to new_query_log

answered 28 Jun '16, 11:37 Mike Sirs

Thank you!
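The answer above is cut off in this archive. As a hedged sketch of the disk-usage side of the question: in the older PHP-based Nominatim of that era, incoming queries could be logged to the `new_query_log` table, which grows over time. Assuming that version, and assuming the database is simply named `nominatim` (both are assumptions, not stated in the thread), the accumulated log could be discarded like this once logging is switched off:

```shell
# Hypothetical sketch: reclaim disk space taken by Nominatim's query log.
# Assumptions: an older PHP-based Nominatim that logs queries to the
# new_query_log table, logging already disabled in its settings, and a
# database named "nominatim" (adjust to your setup).
psql -d nominatim -c "TRUNCATE new_query_log;"
```

TRUNCATE is preferable to DELETE here because it releases the table's disk space immediately instead of leaving dead rows behind for VACUUM.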
(13 Nov '18, 21:39) Taras O
Hello @Taras, can you describe in more detail how you load the indices/tables into the system cache?
In my case I only need a geocoder for my country, and its data is rather small, so I think caching the main table in memory would improve performance a lot, but I can't find any guidelines on how to do this.
Please do not ask or expand questions in answers, either use comments or edit your original question.
@Bui Khanh, yes. Basically, when you access a file, it ends up in the system cache (or rather, the part of it that was read). The idea is to determine which files you need (this depends on your use case) and pull them into the cache. For me the relevant relations were:
idx_place_id
idx_placex_geometry
idx_place_addressline_place_id
placex
So, for me, finding those relations and reading their files once did the trick. I used the pg_relation_filepath() function to determine the file paths, and then used the vmtouch tool to load them into memory. I hope this helps!
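The steps above can be sketched as a small shell loop. This is a sketch under a few assumptions not stated in the thread: the database is named `nominatim`, the shell user can read the PostgreSQL data directory (e.g. it runs as the `postgres` OS user), and vmtouch is installed; the relation names are the ones listed in the answer.

```shell
# Sketch: pin Nominatim's hot relations into the OS page cache.
# pg_relation_filepath() returns a path relative to the data directory,
# so we fetch that directory first.
DATA_DIR=$(psql -d nominatim -Atc "SHOW data_directory")

for rel in placex idx_place_id idx_placex_geometry idx_place_addressline_place_id; do
    relpath=$(psql -d nominatim -Atc "SELECT pg_relation_filepath('$rel')")
    # vmtouch -t touches every page, forcing the kernel to load the file
    # into the page cache. Relations larger than 1 GB are split into
    # segments named <file>, <file>.1, <file>.2, ... hence the glob.
    vmtouch -t "$DATA_DIR/$relpath"*
done
```

Note that this only warms the cache; under memory pressure the kernel may evict the pages again, so for a long-running service you may want to re-run it periodically (vmtouch also has a lock mode, at the cost of pinning RAM).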