GeoWebCache Optimal Deployment Guidelines
GeoWebCache (GWC) is a modular tile cache system able to support the most common tile-oriented protocols, using any compliant WMS server as a source for tiles and caching them on disk for speed. GeoWebCache can run either stand-alone as a J2EE web application or as a GeoServer plugin.
The functionality provided by GWC can be roughly split into four main components:
- Service level: ability to handle WMTS, WMS-C, Google and VirtualEarth tile protocols
- Tile source level: handles connectivity to remote WMS servers to fetch tiles, but allows for plugging in other tile sources as well (e.g., the ArcGIS tile cache format)
- Core tile caching: ability to store and reuse tiles on a disk cache
- Disk quota: subsystem limiting the disk usage of each tile cache and of the overall tile cache set, posing both global and per-layer limits
The components offer the full functionality, but have different limitations when it comes to setting up a clustered, round-robin, fully read/write installation of GWC:
- The protocol handling level poses no issues to clustering, as all the protocol handlers are fully stateless and easily lend themselves to simple round-robin clustering.
- The core tile caching subsystem needs to avoid double writes, or concurrent reads and writes, as different GWC instances access the same tiles on disk.
- The disk quota mechanism can use either the embedded Oracle Berkeley DB JE NoSQL database, an embedded H2 database, or an external relational database such as PostgreSQL or Oracle.
GeoWebCache has a special relationship with GeoServer: it is also available as a standard plugin, tightly integrated with the GeoServer GUI and WMS operations, which makes it easier to use and speeds up the creation of new tiles.
The usage of stand-alone versus the integrated solution has pros and cons:
- The workload of a tile cache is completely different from that of the OGC services: it involves a number of concurrent requests that is orders of magnitude higher, normally with low CPU usage and high disk I/O. From this point of view, having a separate GeoWebCache is preferable, to prevent the WMTS load from overwhelming the other OGC services
- The configuration of a separate GeoWebCache instance duplicates the work already done in GeoServer for the layer configuration and requires manual intervention every time a new layer is configured in GeoServer. Moreover, a stand-alone GeoWebCache needs to ask GeoServer for the metatiles in some image format, which it will have to decode, slice into tiles, and save on disk, whilst an integrated version can tap directly into the in-memory image generated by GeoServer and perform the slicing without going through an intermediate encoding/decoding operation. These considerations favor an integrated GeoWebCache approach.
From the point of view of high availability, two or more GeoWebCache instances should be set up to share the same disk tile storage, which therefore needs to be located on a network file system. It has to be noted that, performance-wise, clustering is not normally required: it is well known that a single GeoWebCache can easily flood a Gigabit line. As a result there is no real need to distribute the load among various nodes; the gain obtained from clustering GeoWebCache is merely high availability.
GeoWebCache cache directory
When possible, use an external and shared GeoWebCache cache directory.
This can be easily achieved by setting the following environment variable on the Tomcat instances:
GEOWEBCACHE_CACHE_DIR=<full_path_to_external_folder>
This folder will contain custom configurations, logs and all the tiles of GeoWebCache.
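As a minimal sketch, assuming a standard Tomcat installation, the variable can be exported from the bin/setenv.sh script (the path below is purely an example and must point to the shared storage):

# <tomcat>/bin/setenv.sh -- example only, adjust the path to the shared storage
export GEOWEBCACHE_CACHE_DIR=/mnt/shared/gwc_cache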
The size of this folder must be large enough to contain the tile pyramids: GeoWebCache will create 256x256 images for each output format (PNG/JPEG/GIF/…), Coordinate Reference System/gridset and zoom level of all the cached layers, as well as for any combination of allowed extra parameters coming in.
The size of the cache can, in any case, be controlled by enabling the disk quota subsystem, which can be properly clustered starting with GWC 1.4.x (GeoServer 2.3.x); in order to do so, the disk quota configuration has to be stored in a centralized database (which should be, in turn, clustered to avoid any single point of failure).
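As an illustration only, a JDBC-backed disk quota store pointing at a central PostgreSQL database is configured through a geowebcache-diskquota-jdbc.xml file placed in the cache directory; the sketch below uses placeholder connection details, and the exact element names and defaults should be checked against the GeoWebCache documentation for the version in use:

<gwcJdbcConfiguration>
  <dialect>PostgreSQL</dialect>
  <connectionPool>
    <driver>org.postgresql.Driver</driver>
    <url>jdbc:postgresql://dbhost:5432/gwc_quota</url>
    <username>gwc</username>
    <password>secret</password>
    <minConnections>1</minConnections>
    <maxConnections>10</maxConnections>
    <connectionTimeout>50</connectionTimeout>
  </connectionPool>
</gwcJdbcConfiguration>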
GeoWebCache releases
Starting with the 1.4.x series, GeoWebCache received a number of improvements that make it suitable for active/active clustering, in particular:
- The metastore has been removed and replaced by a mechanism embedded in the tile cache file system itself
- The disk quota subsystem can now use a centralized database, Oracle or PostgreSQL, to store information about disk space usage
- A locking mechanism has been devised allowing multiple processes to coordinate both file system writes and meta-tile requests, thus avoiding data corruption and minimizing duplication of work
Given the above, GeoWebCache 1.4.x can easily be deployed as a cluster of stand-alone instances in active/active mode, with all the instances operating at the same time.
Other relevant improvements followed that release and are mentioned in https://github.com/GeoWebCache/geowebcache/blob/main/RELEASE_NOTES.txt , including:
- Support for the modern application/vnd.mapbox-vector-tile Mapbox Vector Tiles MIME type
- Java 11 support
- Support TMS FlipY
- Allow the S3 storage to work against Cohesity
- Introduce 512px tiles as a possible alternative to the default gridsets
- ...
GeoWebCache is also integrated in the GeoServer series, making it possible to directly use the integrated version and thus have just a single cluster of GeoServer instances. The WMTS workload taking over the servers and leaving little room to the other OGC services is a source of concern, but it can be mitigated by the following steps:
- Installing the control-flow plugin in GeoServer
- Use control-flow to limit the number of GeoWebCache requests that the server will actually handle in parallel
- Increase the Tomcat thread pool to a high number (e.g. 2000) and reduce to zero the requests queue
The last point deserves an explanation. By default Tomcat uses at most 200 threads to serve requests; if more parallel requests come in, they are stored in a queue whose size is, for all practical purposes, infinite (see https://tomcat.apache.org/tomcat-9.0-doc/config/executor.html). This means that GeoServer, and in particular the control-flow subsystem, only gets to see the 200 requests currently running, and cannot effectively prioritize the WMS/WFS requests.
By setting the thread pool to a large size we allow control-flow to see all of the incoming requests and apply the rules that will properly prioritize the different request types. Setting the request queue to zero will make Tomcat refuse further connections, which is not a problem if the thread pool size is large enough to accommodate the maximum expected load (a server under excessive load should start dropping incoming requests anyway, to protect its own stability).
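For reference, a hedged sketch of the Tomcat side of this setup: on a standard HTTP Connector the thread pool size is controlled by the maxThreads attribute and the pending connection queue by acceptCount (when a shared Executor is used, the corresponding settings are maxThreads and maxQueueSize); the values below are examples, to be sized on the expected load:

<!-- conf/server.xml, example values only -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="2000" acceptCount="0" />

On the GeoServer side, the control-flow rules live in a control-flow.properties file in the data directory; a minimal sketch that caps parallel tile requests while leaving room for the other OGC services could look like the following (the numbers are placeholders to be tuned on the actual hardware):

# cap concurrent tile (GWC/WMTS) requests
ows.gwc=16
# cap concurrent WMS GetMap requests
ows.wms.getmap=10
# overall cap on concurrent OGC requests
ows.global=100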