Hi there, I want to build a full-planet map tile service that should serve a lot of traffic, something like 300M requests/day. I thought about:

Since I want to run it on a cloud provider (Google/AWS), is there a way to share a cache between the tile servers so that they can auto-scale or even run on spot instances (which can be terminated with only a 2-3 minute heads-up)?

asked 09 Sep '19, 08:00
LiorM
The bottleneck when rendering new tiles is the PostGIS database and its disk storage, not the actual rendering of the tiles. Therefore I am not sure that scaling up the number of tile servers without also scaling up the PostGIS database does any good. A more promising approach, in my opinion, would be to have a fast local PostGIS database on each tile server, fed from one master PostGIS using Slony or another PostgreSQL replication mechanism. That would give you a startup time for a new tile server of a couple of hours (better than the ~16 hours it would take to actually import a full database) AND you would be distributing the database load.
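To make that concrete, the sketch below shows roughly how a freshly launched tile server could clone its local database before registering with the load balancer. It uses plain PostgreSQL streaming replication via pg_basebackup rather than Slony, and the host name, data directory, and replication role are placeholders for the example, not anything agreed on in this thread.

```python
#!/usr/bin/env python3
"""Bootstrap sketch: clone a local PostGIS replica from the master
before this tile server joins the load balancer pool.

Assumptions (illustrative only): streaming replication instead of
Slony, a replication role named 'replicator', Debian-style paths.
"""
import subprocess
import sys

MASTER_HOST = "postgis-master.internal"   # assumed master hostname
DATA_DIR = "/var/lib/postgresql/12/main"  # assumed data directory
REPLICATION_USER = "replicator"           # assumed replication role


def bootstrap_replica():
    # Stop the local cluster and clear the stale data directory;
    # pg_basebackup requires an empty target directory.
    subprocess.run(["systemctl", "stop", "postgresql"], check=True)
    subprocess.run(["rm", "-rf", DATA_DIR], check=True)

    # pg_basebackup streams a full base backup from the master and,
    # with -R, writes the standby configuration so the local instance
    # starts as a read-only replica that keeps following the master.
    subprocess.run(
        [
            "pg_basebackup",
            "-h", MASTER_HOST,
            "-U", REPLICATION_USER,
            "-D", DATA_DIR,
            "-X", "stream",   # stream WAL while the backup runs
            "-R",             # write standby/recovery configuration
            "-P",             # show progress
        ],
        check=True,
    )

    subprocess.run(["chown", "-R", "postgres:postgres", DATA_DIR], check=True)
    subprocess.run(["systemctl", "start", "postgresql"], check=True)


if __name__ == "__main__":
    try:
        bootstrap_replica()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"replica bootstrap failed: {exc}")
```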
But YMMV - if you expect most of those 300M tiles/day to be for the same hot spots, then the CDN and the tile servers' caches will already take away most of the load.

To answer your question, mod_tile and renderd support "pluggable storage backends" (https://lists.openstreetmap.org/pipermail/dev/2013-March/026689.html), which is something you might want to explore; however, I am not aware of anyone using that in production.
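As an illustration of what a shared cache in front of several auto-scaled renderers could look like, here is a minimal sketch: a small front-end that checks a shared object store before falling back to the local renderer. The bucket name, URL layout, and the choice to cache individual PNG tiles rather than metatiles are all assumptions made for the example; this is not something mod_tile provides out of the box.

```python
"""Sketch of a shared tile cache in front of several auto-scaled renderers.

Assumptions (illustrative only): tiles are cached as single PNGs in an
S3 bucket named 'tile-cache', and rendered on a cache miss by a local
mod_tile/Apache listening on localhost.
"""
import boto3
import requests
from flask import Flask, Response, abort

app = Flask(__name__)
s3 = boto3.client("s3")

BUCKET = "tile-cache"                    # assumed bucket name
LOCAL_RENDERER = "http://localhost/osm"  # assumed local mod_tile URL


@app.route("/tile/<int:z>/<int:x>/<int:y>.png")
def tile(z, x, y):
    key = f"{z}/{x}/{y}.png"

    # 1. Try the shared cache first so every instance benefits from
    #    tiles that any other instance has already rendered.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=key)
        return Response(obj["Body"].read(), mimetype="image/png")
    except s3.exceptions.NoSuchKey:
        pass

    # 2. Cache miss: ask the local renderd/mod_tile to produce the tile.
    resp = requests.get(f"{LOCAL_RENDERER}/{key}", timeout=30)
    if resp.status_code != 200:
        abort(resp.status_code)

    # 3. Publish the freshly rendered tile so other (possibly spot)
    #    instances never have to render it again.
    s3.put_object(Bucket=BUCKET, Key=key, Body=resp.content,
                  ContentType="image/png")
    return Response(resp.content, mimetype="image/png")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Because the cached tiles live outside the instances, a spot termination only loses whatever that renderer was working on at that moment.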
answered 09 Sep '19, 09:41
Frederik Ramm ♦

So, assuming that I have a machine image (AMI) with PostGIS + renderd on it, how can I put (and scale) more than one instance behind a load balancer and make them auto-scale? The load balancer will send each request to a different renderer, so the same files end up being created on several renderers. How can I share the cache (metatiles) between all of them?

(09 Sep '19, 09:58)
LiorM
Options:
(09 Sep '19, 11:19)
Frederik Ramm ♦