Martin is a tile server able to generate and serve vector tiles on the fly from large PostGIS databases, PMTiles (local or remote), and MBTiles files, allowing multiple tile sources to be dynamically combined into one. Martin optimizes for speed and heavy traffic, and is written in Rust.
See also Martin demo site
Martin Quick Start Guide
Choose your operating system to get started with the Martin tile server.
Quick start on Linux
mkdir martin
cd martin
# Download some sample data
curl -L -O https://github.com/maplibre/martin/raw/main/tests/fixtures/mbtiles/world_cities.mbtiles
# Download the latest version of Martin binary, extract it, and make it executable
curl -L -O https://github.com/maplibre/martin/releases/latest/download/martin-x86_64-unknown-linux-gnu.tar.gz
tar -xzf martin-x86_64-unknown-linux-gnu.tar.gz
chmod +x ./martin
# Show Martin help screen
./martin --help
# Run Martin with the sample data as the only tile source
./martin world_cities.mbtiles
View the map
See quick start with QGIS for instructions on how to view the map.
Quick start on macOS
- Download some demo tiles.
- Download the latest version of Martin from the release page. Use "About This Mac" to find your processor type.
  - Use martin-x86_64-apple-darwin.tar.gz for Intel
  - Use martin-aarch64-apple-darwin.tar.gz for M1
- Extract the contents of both files and place them in the same directory.
- Open the terminal and navigate to the directory where martin and world_cities.mbtiles are located.
- Run the following commands to start Martin with the demo data:
# Show Martin help screen
./martin --help
# Run Martin with the sample data as the only tile source
./martin world_cities.mbtiles
View the map
See quick start with QGIS for instructions on how to view the map.
Quick start on Windows
- Download some demo tiles.
- Download the latest Windows version of Martin from the release page: martin-x86_64-pc-windows-msvc.zip
- Extract the contents of both files and place them in the same directory.
- Open the command prompt and navigate to the directory where martin and world_cities.mbtiles are located.
- Run the following commands to start Martin with the demo data:
# Show Martin help screen
martin --help
# Run Martin with the sample data as the only tile source
martin world_cities.mbtiles
View the map
See quick start with QGIS for instructions on how to view the map.
View map with QGIS
- Download, install, and run QGIS for your platform.
- Add a new Vector Tiles connection.
- In the Vector Tile Connection dialog, give it some name and the URL of the Martin server, e.g. http://localhost:3000/world_cities/{z}/{x}/{y}, and click OK.
- In the QGIS browser panel (left), double-click the newly added connection, or right-click it and click Add Layer to Project.
- The map should now be visible in the QGIS map view.
Prerequisites
If you are using Martin with a PostgreSQL database, you must install PostGIS v3.0 or newer; PostGIS v3.1+ is recommended.
Docker
Martin is also available as a Docker image. You can either share a configuration file from the host with the container via the -v param, or let Martin auto-discover all sources, e.g. by passing DATABASE_URL or by specifying the .mbtiles/.pmtiles files or URLs to .pmtiles.
export PGPASSWORD=postgres # secret!
docker run -p 3000:3000 \
-e PGPASSWORD \
-e DATABASE_URL=postgresql://user@host:port/db \
-v /path/to/config/dir:/config \
ghcr.io/maplibre/martin --config /config/config.yaml
From Binary Distributions Manually
You can download martin from the GitHub releases page.
Platform | x64 | ARM-64 |
---|---|---|
Linux | .tar.gz (gnu), .tar.gz (musl), .deb | .tar.gz (musl) |
macOS | .tar.gz | .tar.gz |
Windows | .zip | |
Rust users can install the pre-built martin binary with cargo-binstall and cargo.
cargo install cargo-binstall
cargo binstall martin
martin --help
From package
To install with apt and other package managers, we need your help to improve packaging for various platforms.
Homebrew
If you are using macOS and Homebrew, you can install martin using the Homebrew tap.
brew tap maplibre/martin
brew install martin
martin --help
Debian Packages (x86_64) manually
curl -O https://github.com/maplibre/martin/releases/latest/download/martin-Debian-x86_64.deb
sudo dpkg -i ./martin-Debian-x86_64.deb
martin --help
rm ./martin-Debian-x86_64.deb
Building from source
If you have Rust installed, you can build martin from source with Cargo:
cargo install martin --locked
martin --help
Usage
Martin requires at least one PostgreSQL connection string or a tile source file as a command-line argument. A PG connection string can also be passed via the DATABASE_URL environment variable.
martin postgresql://postgres@localhost/db
Martin provides a TileJSON endpoint for each geospatial-enabled table in your database.
Command-line Interface
You can configure Martin using the command-line interface. See martin --help or cargo run -- --help for more information.
Usage: martin [OPTIONS] [CONNECTION]...
Arguments:
[CONNECTION]...
Connection strings, e.g. postgres://... or /path/to/files
Options:
-c, --config <CONFIG>
Path to config file. If set, no tile source-related parameters are allowed
--save-config <SAVE_CONFIG>
Save resulting config to a file or use "-" to print to stdout. By default, only print if sources are auto-detected
-C, --cache-size <CACHE_SIZE>
Main cache size (in MB)
-s, --sprite <SPRITE>
Export a directory with SVG files as a sprite source. Can be specified multiple times
-f, --font <FONT>
Export a font file or a directory with font files as a font source (recursive). Can be specified multiple times
-k, --keep-alive <KEEP_ALIVE>
Connection keep alive timeout. [DEFAULT: 75]
-l, --listen-addresses <LISTEN_ADDRESSES>
The socket address to bind. [DEFAULT: 0.0.0.0:3000]
--base-path <BASE_PATH>
Set TileJSON URL path prefix. This overrides the default of respecting the X-Rewrite-URL header.
Only modifies the JSON (TileJSON) returned; Martin's API URLs remain unchanged. If you need to rewrite URLs, please use a reverse proxy.
Must begin with a `/`.
Examples: `/`, `/tiles`
-W, --workers <WORKERS>
Number of web server workers
--preferred-encoding <PREFERRED_ENCODING>
Martin server preferred tile encoding. If the client accepts multiple compression formats, and the tile source is not pre-compressed, which compression should be used. `gzip` is faster, but `brotli` is smaller, and may be faster with caching. Defaults to gzip
[possible values: brotli, gzip]
-u, --webui <WEB_UI>
Control Martin web UI. Disabled by default
Possible values:
- disable: Disable Web UI interface. This is the default, but once implemented, the default will be enabled for localhost
- enable-for-all: Enable Web UI interface on all connections
-b, --auto-bounds <AUTO_BOUNDS>
Specify how bounds should be computed for the spatial PG tables. [DEFAULT: quick]
Possible values:
- quick: Compute table geometry bounds, but abort if it takes longer than 5 seconds
- calc: Compute table geometry bounds. The startup time may be significant. Make sure all GEO columns have indexes
- skip: Skip bounds calculation. The bounds will be set to the whole world
--ca-root-file <CA_ROOT_FILE>
Loads trusted root certificates from a file. The file should contain a sequence of PEM-formatted CA certificates
-d, --default-srid <DEFAULT_SRID>
If a spatial PG table has SRID 0, then this default SRID will be used as a fallback
-p, --pool-size <POOL_SIZE>
Maximum Postgres connections pool size [DEFAULT: 20]
-m, --max-feature-count <MAX_FEATURE_COUNT>
Limit the number of features in a tile from a PG table source
-h, --help
Print help (see a summary with '-h')
-V, --version
Print version
Environment Variables
You can also configure Martin using environment variables, but only if the configuration file is not used. See the configuration section on how to use environment variables with config files. See also the SSL configuration section below.
Environment var | Config File key | Example | Description |
---|---|---|---|
DATABASE_URL | connection_string | postgresql://postgres@localhost/db | Postgres database connection |
DEFAULT_SRID | default_srid | 4326 | If a PostgreSQL table has a geometry column with SRID=0, use this value instead |
PGSSLCERT | ssl_cert | ./postgresql.crt | A file with a client SSL certificate. docs |
PGSSLKEY | ssl_key | ./postgresql.key | A file with the key for the client SSL certificate. docs |
PGSSLROOTCERT | ssl_root_cert | ./root.crt | A file with trusted root certificate(s). The file should contain a sequence of PEM-formatted CA certificates. docs |
AWS_LAMBDA_RUNTIME_API | | | If defined, connect to AWS Lambda to handle requests. The regular HTTP server is not used. See Running in AWS Lambda |
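The same database settings can instead come from a config file. As a sketch, the config-file equivalents of the variables above, using the example values from the table:

```yaml
postgres:
  connection_string: 'postgresql://postgres@localhost/db'
  default_srid: 4326
  ssl_cert: './postgresql.crt'
  ssl_key: './postgresql.key'
  ssl_root_cert: './root.crt'
```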
Running with Docker
You can use the official Docker image ghcr.io/maplibre/martin.
Using Non-Local PostgreSQL
docker run \
-p 3000:3000 \
-e DATABASE_URL=postgresql://postgres@postgres.example.org/db \
ghcr.io/maplibre/martin
Exposing Local Files
You can expose local files to the Docker container using the -v flag.
docker run \
-p 3000:3000 \
-v /path/to/local/files:/files \
ghcr.io/maplibre/martin /files
Accessing Local PostgreSQL on Linux
If you are running a PostgreSQL instance on localhost, you have to change network settings to allow the Docker container to access the localhost network.
For Linux, add the --net=host flag to access the localhost PostgreSQL service. You do not need to export ports with -p because the container is already using the host network.
docker run \
--net=host \
-e DATABASE_URL=postgresql://postgres@localhost/db \
ghcr.io/maplibre/martin
Accessing Local PostgreSQL on macOS
For macOS, use host.docker.internal as the hostname to access the localhost PostgreSQL service.
docker run \
-p 3000:3000 \
-e DATABASE_URL=postgresql://postgres@host.docker.internal/db \
ghcr.io/maplibre/martin
Accessing Local PostgreSQL on Windows
For Windows, use docker.for.win.localhost as the hostname to access the localhost PostgreSQL service.
docker run \
-p 3000:3000 \
-e DATABASE_URL=postgresql://postgres@docker.for.win.localhost/db \
ghcr.io/maplibre/martin
Running with Docker Compose
You can use the example docker-compose.yml file as a reference:
services:
martin:
image: ghcr.io/maplibre/martin:v0.13.0
restart: unless-stopped
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgresql://postgres:password@db/db
depends_on:
- db
db:
image: postgis/postgis:16-3.4-alpine
restart: unless-stopped
environment:
- POSTGRES_DB=db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
volumes:
# persist PostgreSQL data in a local directory outside of the docker container
- ./pg_data:/var/lib/postgresql/data
First, you need to start the db service:
docker compose up -d db
Then, after the db service is ready to accept connections, you can start martin:
docker compose up -d martin
By default, Martin will be available at localhost:3000.
The official Docker image includes a HEALTHCHECK instruction which will be used by Docker Compose. Note that Compose won’t restart unhealthy containers. To monitor and restart unhealthy containers you can use Docker Autoheal.
Using with NGINX
You can run Martin behind an NGINX proxy to cache frequently accessed tiles and reduce unnecessary pressure on the database. Here is an example docker-compose.yml file that runs Martin with NGINX and PostgreSQL.
version: '3'
services:
nginx:
image: nginx:alpine
restart: unless-stopped
ports:
- "80:80"
volumes:
- ./cache:/var/cache/nginx
- ./nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- martin
martin:
image: maplibre/martin:v0.7.0
restart: unless-stopped
environment:
- DATABASE_URL=postgresql://postgres:password@db/db
depends_on:
- db
db:
image: postgis/postgis:14-3.3-alpine
restart: unless-stopped
environment:
- POSTGRES_DB=db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
volumes:
- ./pg_data:/var/lib/postgresql/data
You can find an example NGINX configuration file here.
Rewriting URLs
If you are running Martin behind NGINX proxy, you may want to rewrite the request URL to properly handle tile URLs in TileJSON.
location ~ /tiles/(?<fwd_path>.*) {
proxy_set_header X-Rewrite-URL $uri;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_pass http://martin:3000/$fwd_path$is_args$args;
}
Caching tiles
You can also use NGINX to cache tiles. In the example, the maximum cache size is set to 10GB, and caching time is set to 1 hour for responses with codes 200, 204, and 302 and 1 minute for responses with code 404.
http {
...
proxy_cache_path /var/cache/nginx/
levels=1:2
max_size=10g
use_temp_path=off
keys_zone=tiles_cache:10m;
server {
...
location ~ /tiles/(?<fwd_path>.*) {
proxy_set_header X-Rewrite-URL $uri;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_cache tiles_cache;
proxy_cache_lock on;
proxy_cache_revalidate on;
# Set caching time for responses
proxy_cache_valid 200 204 302 1h;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://martin:3000/$fwd_path$is_args$args;
}
}
}
You can find an example NGINX configuration file here.
Using with Apache
You can run Martin behind an Apache proxy, which also lets you serve it over HTTPS. Here is an example of the configuration that runs Martin with Apache.
First, you have to set up a virtual host working on port 443.
Enable necessary modules
Ensure the required modules are enabled:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod headers
sudo a2enmod rewrite
Modify your VHOST configuration
Open the VHOST configuration file for the domain you’re using, e.g. mydomain.tld:
sudo nano /etc/apache2/sites-available/mydomain.tld.conf
Update the configuration
<VirtualHost *:443>
ServerName mydomain.tld
ServerAdmin webmaster@localhost
DocumentRoot /var/www/mydomain
ProxyPreserveHost On
RewriteEngine on
RewriteCond %{REQUEST_URI} ^/tiles/(.*)$
RewriteRule ^/tiles/(.*)$ http://localhost:3000/tiles/$1 [P,L]
<IfModule mod_headers.c>
RequestHeader set X-Forwarded-Proto "https"
</IfModule>
ProxyPass / http://localhost:3000/
ProxyPassReverse / http://localhost:3000/
</VirtualHost>
Check Configuration: Verify the Apache configuration for syntax errors
sudo apache2ctl configtest
Restart Apache: If the configuration is correct, restart Apache to apply the changes
sudo systemctl restart apache2
Using with AWS Lambda - v0.14+
Martin can run in AWS Lambda. This is useful if you want to serve tiles from a serverless environment, while accessing “nearby” data from a PostgreSQL database or PMTiles file in S3, without exposing the raw file to the world to prevent download abuse and improve performance.
Lambda has two deployment models: zip file and container-based. When using zip file deployment, there is an online code editor to edit the yaml configuration. When using container-based deployment, we can pass our configuration on the command line or environment variables.
Everything can be performed via AWS CloudShell, or you can install the AWS CLI and the AWS SAM CLI, and configure authentication. The CloudShell also runs in a particular AWS region.
Container deployment
Lambda images must come from a public or private ECR registry. Pull the image from GHCR and push it to ECR.
$ docker pull ghcr.io/maplibre/martin:latest --platform linux/arm64
$ aws ecr create-repository --repository-name martin
[…]
"repositoryUri": "493749042871.dkr.ecr.us-east-2.amazonaws.com/martin",
# Read the repositoryUri which includes your account number
$ docker tag ghcr.io/maplibre/martin:latest 493749042871.dkr.ecr.us-east-2.amazonaws.com/martin:latest
$ aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 493749042871.dkr.ecr.us-east-2.amazonaws.com
$ docker push 493749042871.dkr.ecr.us-east-2.amazonaws.com/martin:latest
Open the Lambda console and create your function:
- Click “Create function”.
- Choose “Container image”.
- Put something in “Function name”.
  - Note: this is an internal identifier, not exposed in the function URL.
- Click “Browse images”, and select your repository and the tag.
  - If you cannot find it, check whether you are in the same region.
- Expand “Container image overrides”, and under CMD put the URL of a .pmtiles file.
- Set “Architecture” to arm64 to match the platform that we pulled. Lambda has better ARM CPUs than x86.
- Click “Create function”.
- Find the “Configuration” tab, select “Function URL”, “Create function URL”.
  - Set “Auth type” to NONE.
  - Do not enable CORS. Martin already has CORS support, so enabling it here would create incorrect duplicate headers.
- Click on the “Function URL”.
- To debug an issue, open the “Monitor” tab, “View CloudWatch logs”, and find the most recent log stream.
Zip deployment
It’s possible to deploy the entire codebase from the AWS console, but we will use the Serverless Application Model. Our function will consist of a “Layer” containing the Martin binary, and the function itself will contain the configuration in yaml format.
The layer
Download the binary and place it in your staging directory. The bin directory of your Layer will be added to the PATH.
mkdir -p martin_layer/src/bin/
cd martin_layer
curl -OL https://github.com/maplibre/martin/releases/latest/download/martin-aarch64-unknown-linux-musl.tar.gz
tar -C src/bin/ -xzf martin-aarch64-unknown-linux-musl.tar.gz martin
Every zip-based Lambda function runs a file called bootstrap:
cat <<EOF >src/bootstrap
#!/bin/sh
set -eu
exec martin --config \${_HANDLER}.yaml
EOF
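Lambda passes the configured handler string to the runtime via the _HANDLER environment variable, which is how the bootstrap above resolves the config file name. A minimal sketch of that resolution (hello.handler is the example handler name used later in this guide):

```shell
# Simulate the file-name resolution done by the bootstrap script above.
# In a real Lambda, _HANDLER is set from the function's handler setting.
_HANDLER=hello.handler
echo "martin --config ${_HANDLER}.yaml"
# → martin --config hello.handler.yaml
```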
Write the SAM template.
cat <<EOF >template.yaml
AWSTemplateFormatVersion: 2010-09-09
Transform: 'AWS::Serverless-2016-10-31'
Resources:
martin:
Type: 'AWS::Serverless::LayerVersion'
DeletionPolicy: Delete
Properties:
ContentUri: src
CompatibleRuntimes:
- provided.al2023
CompatibleArchitectures:
- arm64
Outputs:
LayerArn:
    Value: !Ref martin
    Export:
      Name: !Sub "\${AWS::StackName}-LayerArn"
EOF
Run sam deploy --guided.
- Stack Name: name your CloudFormation stack something like martin-layer.
- Press enter for everything else.
- The settings are saved to samconfig.toml, so you can later do sam deploy to update the version, or sam delete.
Now if you visit the Lambda console and select “Layers”, you should see your layer.
The function
- Select “Functions”, “Create function”.
- Put something in “Function name”.
- Set “Runtime” to “Amazon Linux 2023”.
- Set “Architecture” to “arm64”.
- Under “Advanced settings”, choose “Enable function URL” with “Auth type” of “NONE”.
- Click “Create function”.
Add your layer:
- Click “add a layer” (green banner at the top, or the very bottom).
- Choose “Custom layers”, and select your layer and its version.
- Click “Add”.
Add your configuration file in the function source code:
- In the Code tab, go to File, New File, and create hello.handler.yaml with the following content:
  pmtiles:
    sources:
      demotiles: <url to a pmtiles file>
- Click Deploy, wait for the success banner, and visit your function URL.
TODO
AWS Lambda support is preliminary; there are features to add to Martin, configuration to tweak, and documentation to improve. Your help is welcome.
- Lambda has a default timeout of 3 seconds and 128 MB of memory, which may be suboptimal.
- Document how to connect to a PostgreSQL database on RDS.
- Set up a CloudFront CDN; this is a whole thing, but explain the motivation and the basics.
- Grant the execution role permission to read objects from an S3 bucket, and teach Martin how to make authenticated requests to S3.
- Teach Martin how to serve all PMTiles files from an S3 bucket rather than having to list them at startup.
- Teach Martin how to set the Cache-Control and Etag headers for better defaults.
Troubleshooting
Log levels are controlled on a per-module basis, and by default all logging is disabled except for errors. Logging is controlled via the RUST_LOG environment variable, whose value is a comma-separated list of logging directives.
This will enable debug logging for all modules:
export RUST_LOG=debug
martin postgresql://postgres@localhost/db
While this will enable info logging for the actix_web module and debug logging for the martin and tokio_postgres modules:
export RUST_LOG=actix_web=info,martin=debug,tokio_postgres=debug
martin postgresql://postgres@localhost/db
Configuration File
If you don’t want to expose all of your tables and functions, you can list your sources in a configuration file. To start Martin with a configuration file, pass its path via the --config argument. Config files may contain environment variables, which will be expanded before parsing. For example, to use MY_DATABASE_URL in your config file, write connection_string: ${MY_DATABASE_URL}, or with a default: connection_string: ${MY_DATABASE_URL:-postgresql://postgres@localhost/db}
martin --config config.yaml
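The ${VAR:-default} syntax that Martin expands in config files follows shell-style semantics, so you can preview what a given expression will expand to directly in your shell (MY_DATABASE_URL and the values below are placeholders):

```shell
# When the variable is unset, the fallback after ":-" is used:
unset MY_DATABASE_URL
echo "${MY_DATABASE_URL:-postgresql://postgres@localhost/db}"
# → postgresql://postgres@localhost/db

# When it is set, its value wins:
MY_DATABASE_URL=postgresql://app@db.internal/tiles
echo "${MY_DATABASE_URL:-postgresql://postgres@localhost/db}"
# → postgresql://app@db.internal/tiles
```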
You may wish to auto-generate a config file with the --save-config argument. This will generate a yaml config file with all of your configuration, which you can edit to remove any sources you don’t want to expose.
martin ... ... ... --save-config config.yaml
Config Example
# Connection keep alive timeout [default: 75]
keep_alive: 75
# The socket address to bind [default: 0.0.0.0:3000]
listen_addresses: '0.0.0.0:3000'
# Set TileJSON URL path prefix. This overrides the default of respecting the X-Rewrite-URL header.
# Only modifies the JSON (TileJSON) returned; Martin's API URLs remain unchanged. If you need to rewrite URLs, please use a reverse proxy.
# Must begin with a `/`.
# Examples: `/`, `/tiles`
base_path: /tiles
# Number of web server workers
worker_processes: 8
# Amount of memory (in MB) to use for caching tiles [default: 512, 0 to disable]
cache_size_mb: 1024
# If the client accepts multiple compression formats, and the tile source is not pre-compressed, which compression should be used. `gzip` is faster, but `brotli` is smaller, and may be faster with caching. Default could be different depending on Martin version.
preferred_encoding: gzip
# Enable or disable Martin web UI. At the moment, only allows `enable-for-all` which enables the web UI for all connections. This may be undesirable in a production environment. [default: disable]
web_ui: disable
# Database configuration. This can also be a list of PG configs.
postgres:
# Database connection string. You can use env vars too, for example:
# $DATABASE_URL
# ${DATABASE_URL:-postgresql://postgres@localhost/db}
connection_string: 'postgresql://postgres@localhost:5432/db'
# Same as PGSSLCERT for psql
ssl_cert: './postgresql.crt'
# Same as PGSSLKEY for psql
ssl_key: './postgresql.key'
# Same as PGSSLROOTCERT for psql
ssl_root_cert: './root.crt'
# If a spatial table has SRID 0, then this SRID will be used as a fallback
default_srid: 4326
# Maximum Postgres connections pool size [default: 20]
pool_size: 20
# Limit the number of table geo features included in a tile. Unlimited by default.
max_feature_count: 1000
# Control the automatic generation of bounds for spatial tables [default: quick]
# 'calc' - compute table geometry bounds on startup.
# 'quick' - same as 'calc', but the calculation will be aborted if it takes more than 5 seconds.
# 'skip' - do not compute table geometry bounds on startup.
auto_bounds: skip
# Enable automatic discovery of tables and functions.
# You may set this to `false` to disable.
auto_publish:
# Optionally limit to just these schemas
from_schemas:
- public
- my_schema
# Here we enable both tables and functions auto discovery.
# You can also enable just one of them by not mentioning the other,
# or setting it to false. Setting one to true disables the other one as well.
# E.g. `tables: false` enables just the functions auto-discovery.
tables:
# Optionally set how source ID should be generated based on the table's name, schema, and geometry column
source_id_format: 'table.{schema}.{table}.{column}'
# Add more schemas to the ones listed above
from_schemas: my_other_schema
# A table column to use as the feature ID
# If a table has no column with this name, `id_column` will not be set for that table.
# If a list of strings is given, the first found column will be treated as a feature ID.
id_columns: feature_id
# Boolean to control if geometries should be clipped or encoded as is, optional, default to true
clip_geom: true
# Buffer distance in tile coordinate space to optionally clip geometries, optional, default to 64
buffer: 64
# Tile extent in tile coordinate space, optional, default to 4096
extent: 4096
functions:
# Optionally set how source ID should be generated based on the function's name and schema
source_id_format: '{schema}.{function}'
# Associative arrays of table sources
tables:
table_source_id:
# ID of the MVT layer (optional, defaults to table name)
layer_id: table_source
# Table schema (required)
schema: public
# Table name (required)
table: table_source
# Geometry SRID (required)
srid: 4326
# Geometry column name (required)
geometry_column: geom
# Feature id column name
id_column: ~
# An integer specifying the minimum zoom level
minzoom: 0
# An integer specifying the maximum zoom level. MUST be >= minzoom
maxzoom: 30
# The maximum extent of available map tiles. Bounds MUST define an area
# covered by all zoom levels. The bounds are represented in WGS:84
# latitude and longitude values, in the order left, bottom, right, top.
# Values may be integers or floating point numbers.
bounds: [ -180.0, -90.0, 180.0, 90.0 ]
# Tile extent in tile coordinate space
extent: 4096
# Buffer distance in tile coordinate space to optionally clip geometries
buffer: 64
# Boolean to control if geometries should be clipped or encoded as is
clip_geom: true
# Geometry type
geometry_type: GEOMETRY
# List of columns, that should be encoded as tile properties (required)
properties:
gid: int4
# Associative arrays of function sources
functions:
function_source_id:
# Schema name (required)
schema: public
# Function name (required)
function: function_zxy_query
# An integer specifying the minimum zoom level
minzoom: 0
# An integer specifying the maximum zoom level. MUST be >= minzoom
maxzoom: 30
# The maximum extent of available map tiles. Bounds MUST define an area
# covered by all zoom levels. The bounds are represented in WGS:84
# latitude and longitude values, in the order left, bottom, right, top.
# Values may be integers or floating point numbers.
bounds: [ -180.0, -90.0, 180.0, 90.0 ]
# Publish PMTiles files from local disk or proxy to a web server
pmtiles:
paths:
# scan this whole dir, matching all *.pmtiles files
- /dir-path
# specific pmtiles file will be published as a pmt source (filename without extension)
- /path/to/pmt.pmtiles
# A web server with a PMTiles file that supports range requests
- https://example.org/path/tiles.pmtiles
sources:
# named source matching source name to a single file
pm-src1: /path/to/pmt.pmtiles
# A named source to a web server with a PMTiles file that supports range requests
pm-web2: https://example.org/path/tiles.pmtiles
# Publish MBTiles files
mbtiles:
paths:
# scan this whole dir, matching all *.mbtiles files
- /dir-path
# specific mbtiles file will be published as mbtiles2 source
- /path/to/mbtiles.mbtiles
sources:
# named source matching source name to a single file
mb-src1: /path/to/mbtiles1.mbtiles
# Sprite configuration
sprites:
paths:
# all SVG files in this dir will be published as a "my_images" sprite source
- /path/to/my_images
sources:
# SVG images in this directory will be published as a "my_sprites" sprite source
my_sprites: /path/to/some_dir
# Font configuration
fonts:
# A list of *.otf, *.ttf, and *.ttc font files and dirs to search recursively.
- /path/to/font/file.ttf
- /path/to/font_dir
PostgreSQL Connection String
Martin supports many of the PostgreSQL connection string settings such as host, port, user, password, dbname, sslmode, connect_timeout, keepalives, keepalives_idle, etc. See the PostgreSQL docs for more details.
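Settings are appended to the connection string as URL query parameters. A minimal sketch (the user, host, and database names are placeholders):

```shell
# Build a connection string with a few of the supported settings:
PG_BASE='postgresql://postgres@localhost:5432/db'
DATABASE_URL="${PG_BASE}?sslmode=prefer&connect_timeout=10"
echo "$DATABASE_URL"
# → postgresql://postgres@localhost:5432/db?sslmode=prefer&connect_timeout=10
# Martin would then be started with it, e.g.: martin "$DATABASE_URL"
```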
PostgreSQL SSL Connections
Martin supports PostgreSQL sslmode including disable, prefer, require, verify-ca and verify-full modes as described in the PostgreSQL docs. Certificates can be provided in the configuration file, or can be set using the same env vars as used for psql. When set as env vars, they apply to all PostgreSQL connections. See the environment vars section for more details.
By default, sslmode is set to prefer, which means that SSL is used if the server supports it, but the connection is not aborted if the server does not support it. This is the default behavior of psql and is the most compatible option. Use the sslmode param to set a different sslmode, e.g. postgresql://user:password@host/db?sslmode=require.
Table Sources
A Table Source is a database table which can be used to query vector tiles. If a PostgreSQL connection string is given, Martin will publish all tables as data sources if they have at least one geometry column. If a geometry column's SRID is 0, a default SRID must be set, otherwise that geo-column/table will be ignored. All non-geometry table columns will be published as vector tile feature tags (properties).
Modifying TileJSON
Martin will automatically generate a TileJSON manifest for each table source. It will contain the name, description, minzoom, maxzoom, bounds, and vector_layers information.
For example, if there is a table public.table_source, the default TileJSON might look like this (note that the URL will be automatically adjusted to match the request host).
The table:
CREATE TABLE "public"."table_source" ( "gid" int4 NOT NULL, "geom" "public"."geometry" );
The TileJSON:
{
"tilejson": "3.0.0",
"tiles": [
"http://localhost:3000/table_source/{z}/{x}/{y}"
],
"vector_layers": [
{
"id": "table_source",
"fields": {
"gid": "int4"
}
}
],
"bounds": [
-2.0,
-1.0,
142.84131509869133,
45.0
],
"description": "public.table_source.geom",
"name": "table_source"
}
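The tiles URL template above is resolved per tile; which z/x/y a given longitude/latitude falls into follows the standard slippy-map / Web Mercator tiling math, not anything Martin-specific. A minimal sketch with awk:

```shell
# Compute the z/x/y tile containing a lon/lat at a given zoom
# (standard Web Mercator tiling math, not part of Martin):
lon=0.1 lat=-0.1 z=1
awk -v lon="$lon" -v lat="$lat" -v z="$z" 'BEGIN {
  pi = atan2(0, -1)
  n = 2 ^ z                                # number of tiles per axis
  x = int((lon + 180) / 360 * n)
  lr = lat * pi / 180                      # latitude in radians
  y = int((1 - log(sin(lr)/cos(lr) + 1/cos(lr)) / pi) / 2 * n)
  printf "%d/%d/%d\n", z, x, y
}'
# → 1/1/1, i.e. http://localhost:3000/table_source/1/1/1
```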
By default, the description and name are derived from the table's database identifiers, and the bounds are queried from the database. You can fine-tune these by adjusting the auto_publish section in the configuration file.
TileJSON in SQL Comments
Besides adjusting the auto_publish section in the configuration file, you can fine-tune the TileJSON on the database side directly: add valid JSON as an SQL comment on the table.
Martin will merge the table comment into the generated TileJSON using JSON Merge Patch. The following example updates the description and adds attribution, version, and foo (even a nested DIY field) fields to the TileJSON.
DO $do$ BEGIN
EXECUTE 'COMMENT ON TABLE table_source IS $tj$' || $$
{
"version": "1.2.3",
"attribution": "osm",
"description": "a description from table comment",
"foo": {"bar": "foo"}
}
$$::json || '$tj$';
END $do$;
PostgreSQL Function Sources
A Function Source is a database function which can be used to query vector tiles. When started, Martin will look for functions with a suitable signature. A function that takes z integer (or zoom integer), x integer, y integer, and an optional query json and returns bytea can be used as a Function Source. Alternatively, the function could return a record with a single bytea field, or a record with two fields of types bytea and text, where the text field is an etag key (i.e. md5 hash).
Argument | Type | Description |
---|---|---|
z (or zoom) | integer | Tile zoom parameter |
x | integer | Tile x parameter |
y | integer | Tile y parameter |
query (optional, any name) | json | Query string parameters |
Simple Function
For example, if you have a table table_source in WGS84 (SRID 4326), you can use this function as a Function Source:
CREATE OR REPLACE
FUNCTION function_zxy_query(z integer, x integer, y integer)
RETURNS bytea AS $$
DECLARE
mvt bytea;
BEGIN
SELECT INTO mvt ST_AsMVT(tile, 'function_zxy_query', 4096, 'geom') FROM (
SELECT
ST_AsMVTGeom(
ST_Transform(ST_CurveToLine(geom), 3857),
ST_TileEnvelope(z, x, y),
4096, 64, true) AS geom
FROM table_source
WHERE geom && ST_Transform(ST_TileEnvelope(z, x, y), 4326)
) as tile WHERE geom IS NOT NULL;
RETURN mvt;
END
$$ LANGUAGE plpgsql IMMUTABLE STRICT PARALLEL SAFE;
Function with Query Parameters
Users may add a query parameter to pass additional parameters to the function. The example below filters rows by an optional name query string parameter (the name column is only for illustration; use whatever attributes your table has):
CREATE OR REPLACE
FUNCTION function_zxy_query(z integer, x integer, y integer, query_params json)
RETURNS bytea AS $$
DECLARE
mvt bytea;
BEGIN
SELECT INTO mvt ST_AsMVT(tile, 'function_zxy_query', 4096, 'geom') FROM (
SELECT
ST_AsMVTGeom(
ST_Transform(ST_CurveToLine(geom), 3857),
ST_TileEnvelope(z, x, y),
4096, 64, true) AS geom
FROM table_source
WHERE geom && ST_Transform(ST_TileEnvelope(z, x, y), 4326)
-- filter by the optional ?name=... query parameter, if present
AND (query_params->>'name' IS NULL OR name = query_params->>'name')
) as tile WHERE geom IS NOT NULL;
RETURN mvt;
END
$$ LANGUAGE plpgsql IMMUTABLE STRICT PARALLEL SAFE;
The query_params argument is a JSON representation of the tile request query params. Query params could be passed as simple query values, e.g.
curl localhost:3000/function_zxy_query/0/0/0?token=martin
You can also use urlencoded params to encode complex values:
curl \
--data-urlencode 'arrayParam=[1, 2, 3]' \
--data-urlencode 'numberParam=42' \
--data-urlencode 'stringParam=value' \
--data-urlencode 'booleanParam=true' \
--data-urlencode 'objectParam={"answer" : 42}' \
--get localhost:3000/function_zxy_query/0/0/0
then query_params will be parsed as:
{
"arrayParam": [1, 2, 3],
"numberParam": 42,
"stringParam": "value",
"booleanParam": true,
"objectParam": { "answer": 42 }
}
You can access these params using JSON operators:
...WHERE answer = (query_params->'objectParam'->>'answer')::int;
Modifying TileJSON
Martin will automatically generate a basic TileJSON manifest for each function source. It will contain the name and description of the function, plus optionally minzoom, maxzoom, and bounds (if they were specified via one of the configuration methods). For example, if there is a function public.function_zxy_query_jsonb, the default TileJSON might look like this (note that the URL will be automatically adjusted to match the request host):
{
"tilejson": "3.0.0",
"tiles": [
"http://localhost:3111/function_zxy_query_jsonb/{z}/{x}/{y}"
],
"name": "function_zxy_query_jsonb",
"description": "public.function_zxy_query_jsonb"
}
TileJSON in SQL Comments
To modify the automatically generated TileJSON, you can add a valid JSON as an SQL comment on the function. Martin will merge the function comment into the generated TileJSON using JSON Merge Patch. The following example replaces the description and adds attribution and vector_layers fields to the TileJSON.
Note: This example uses EXECUTE to ensure that the comment is valid JSON (or else PostgreSQL will throw an error). You can use other methods of creating SQL comments.
DO $do$ BEGIN
EXECUTE 'COMMENT ON FUNCTION my_function_name IS $tj$' || $$
{
"description": "my new description",
"attribution": "my attribution",
"vector_layers": [
{
"id": "my_layer_id",
"fields": {
"field1": "String",
"field2": "Number"
}
}
]
}
$$::json || '$tj$';
END $do$;
MBTiles and PMTiles File Sources
Martin can serve any type of tiles from PMTiles and MBTiles files. To serve a file from the CLI, simply pass the path to the file or to a directory with *.mbtiles or *.pmtiles files. A path to a PMTiles file may also be a URL. For example:
martin /path/to/mbtiles/file.mbtiles /path/to/directory https://example.org/path/tiles.pmtiles
You may also want to generate a config file using the --save-config my-config.yaml option, and later edit it and use it with the --config my-config.yaml option.
Composite Sources
Composite Sources allow combining multiple sources into one. A composite source consists of multiple sources separated by commas: {source1},...,{sourceN}. Each source in a composite source can be accessed with its {source_name} as a source-layer property.
The composite source TileJSON endpoint is available at /{source1},...,{sourceN}, and tiles are available at /{source1},...,{sourceN}/{z}/{x}/{y}.
For example, a composite source combining the points and lines sources will be available at /points,lines/{z}/{x}/{y}:
# TileJSON
curl localhost:3000/points,lines
# Whole world as a single tile
curl localhost:3000/points,lines/0/0/0
Sprite Sources
Given a directory with SVG images, Martin will generate a sprite – a JSON index and a PNG image – for both low- and high-resolution displays.
The SVG filenames without extension will be used as the sprites' image IDs (remember that one sprite, and thus one sprite_id, contains multiple images). The images are searched recursively in the given directory, so subdirectory names will be used as prefixes for the image IDs. For example, icons/bicycle.svg will be available as the icons/bicycle sprite image.
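The ID derivation convention can be sketched in Python (an illustration of the convention, not Martin's code; the directory layout in the usage example is hypothetical):

```python
from pathlib import Path

def sprite_image_ids(sprite_dir):
    """Map each SVG file under sprite_dir to its sprite image ID:
    the path relative to the directory, with the .svg suffix stripped."""
    root = Path(sprite_dir)
    return {
        str(svg.relative_to(root).with_suffix("")): svg
        for svg in sorted(root.rglob("*.svg"))
    }
```

With files bear.svg and icons/bicycle.svg in the sprite directory, this yields the image IDs bear and icons/bicycle.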
The sprite generation is not yet cached, and may require an external reverse proxy or CDN for faster operation. If you would like to improve this, please drop us a pull request.
API
Martin uses MapLibre sprites API specification to serve sprites via several endpoints. The sprite image and index are generated on the fly, so if the sprite directory is updated, the changes will be reflected immediately.
You can use the /catalog API to see all the <sprite_id>s with their contained sprites.
Sprite PNG
The GET /sprite/<sprite_id>.png endpoint returns a single PNG sprite image that combines all source images. Additionally, there is a high-DPI version available at GET /sprite/<sprite_id>@2x.png.
Sprite index
The /sprite/<sprite_id>.json endpoint returns a metadata index describing the position and size of each image inside the sprite. Just like the PNG, there is a high-DPI version available at /sprite/<sprite_id>@2x.json.
{
"bicycle": {
"height": 15,
"pixelRatio": 1,
"width": 15,
"x": 20,
"y": 16
},
...
}
Coloring at runtime via Signed Distance Fields (SDFs)
If you want to set the color of a sprite at runtime, you will need to use the Signed Distance Field (SDF) endpoints. For example, MapLibre supports modifying the image via the icon-color and icon-halo-color properties when using SDFs.
SDFs have the significant downside of only allowing one color. If you want multiple colors, you will need to layer icons on top of each other.
The following APIs are available:
- /sdf_sprite/<sprite_id>.json for getting a sprite index as SDF, and
- /sdf_sprite/<sprite_id>.png for getting sprite PNGs as SDF
Combining Multiple Sprites
Multiple sprite_id values can be combined into one sprite with the same pattern as for tile joining: /sprite/<sprite_id1>,<sprite_id2>,...,<sprite_idN>. No ID renaming is done, so identical sprite names will override one another.
Configuring from CLI
A sprite directory can be configured from the CLI with the --sprite flag. The flag can be used multiple times to configure multiple sprite directories. The sprite_id of each sprite will be the name of the directory – in the example below, the sprites will be available at /sprite/sprite_a and /sprite/sprite_b. Use --save-config to save the configuration to the config file.
martin --sprite /path/to/sprite_a --sprite /path/to/other/sprite_b
Configuring with Config File
A sprite directory can be configured from the config file with the sprites key, similar to how MBTiles and PMTiles are configured.
# Sprite configuration
sprites:
paths:
# all SVG files in this directory will be published under the sprite_id "my_images"
- /path/to/my_images
sources:
# SVG images in this directory will be published under the sprite_id "my_sprites"
my_sprites: /path/to/some_dir
The sprites are now available at /sprite/my_images,my_sprites.png (and at the corresponding .json and @2x variants).
Font Sources
Martin can serve glyph ranges from otf, ttf, and ttc fonts as needed by MapLibre text rendering. Martin will generate them dynamically on the fly.
The glyph range generation is not yet cached, and may require an external reverse proxy or CDN for faster operation.
API
Font ranges are available either for a single font, or for a combination of multiple fonts. The font names are case-sensitive and should match the font name in the font file as published in the catalog. Make sure to URL-escape font names, as they usually contain spaces.
Font Request | |
---|---|
Pattern | /font/{name}/{start}-{end} |
Example | /font/Overpass%20Mono%20Bold/0-255 |
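Glyph ranges follow MapLibre's convention of fixed 256-glyph blocks (0-255, 256-511, and so on), so a client can compute the range URL for any character. A small sketch of that calculation (the font name is just an example):

```python
from urllib.parse import quote

def glyph_range_url(font, codepoint):
    """Build the /font/{name}/{start}-{end} URL for the 256-glyph
    block containing the given Unicode codepoint."""
    start = (codepoint // 256) * 256
    return f"/font/{quote(font)}/{start}-{start + 255}"

# "A" (U+0041) falls into the 0-255 block:
print(glyph_range_url("Overpass Mono Bold", ord("A")))
# "你" (U+4F60, codepoint 20320) falls into the 20224-20479 block:
print(glyph_range_url("Overpass Mono Bold", ord("你")))
```

Note the URL-escaping of the space characters in the font name, matching the example request above.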
Composite Font Request
When combining multiple fonts, the glyph range will contain glyphs from the first listed font if available, falling back to the next font if the glyph is not available in the first, and so on. The glyph range will be empty if none of the fonts contain the glyph.
Composite Font Request with fallbacks | |
---|---|
Pattern | /font/{name1},…,{nameN}/{start}-{end} |
Example | /font/Overpass%20Mono%20Bold,Overpass%20Mono%20Light/0-255 |
Catalog
Martin will show all available fonts at the /catalog endpoint.
curl http://127.0.0.1:3000/catalog
{
"fonts": {
"Overpass Mono Bold": {
"family": "Overpass Mono",
"style": "Bold",
"glyphs": 931,
"start": 0,
"end": 64258
},
"Overpass Mono Light": {
"family": "Overpass Mono",
"style": "Light",
"glyphs": 931,
"start": 0,
"end": 64258
},
"Overpass Mono SemiBold": {
"family": "Overpass Mono",
"style": "SemiBold",
"glyphs": 931,
"start": 0,
"end": 64258
}
}
}
Using from CLI
A font file or directory can be configured from the CLI with one or more --font parameters.
martin --font /path/to/font/file.ttf --font /path/to/font_dir
Configuring from Config File
A font directory can be configured from the config file with the fonts key.
# Fonts configuration
fonts:
# A list of *.otf, *.ttf, and *.ttc font files and dirs to search recursively.
- /path/to/font/file.ttf
- /path/to/font_dir
Martin Endpoints
Martin data is available via the following HTTP GET endpoints:
URL | Description |
---|---|
/ | Web UI |
/catalog | List of all sources |
/{sourceID} | Source TileJSON |
/{sourceID}/{z}/{x}/{y} | Map Tiles |
/{source1},…,{sourceN} | Composite Source TileJSON |
/{source1},…,{sourceN}/{z}/{x}/{y} | Composite Source Tiles |
/sprite/{spriteID}[@2x].{json,png} | Sprite sources |
/sdf_sprite/{spriteID}[@2x].{json,png} | SDF Sprite sources |
/font/{font}/{start}-{end} | Font source |
/font/{font1},…,{fontN}/{start}-{end} | Composite Font source |
/health | Martin server health check: returns 200 OK |
Duplicate Source ID
In case there is more than one source with the same name, e.g. a PG function available in two schemas/connections, or a table with more than one geometry column, sources will be assigned unique IDs such as /points, /points.1, etc.
Reserved Source IDs
Some source IDs are reserved for internal use. If you try to use them, they will be automatically renamed to a unique ID the same way duplicate source IDs are handled, e.g. a catalog source will become catalog.1.
Some of the reserved IDs: _, catalog, config, font, health, help, index, manifest, metrics, refresh, reload, sprite, status.
Catalog
A list of all available sources is available via the catalog endpoint:
curl localhost:3000/catalog | jq
{
"tiles": {
"function_zxy_query": {
"name": "public.function_zxy_query",
"content_type": "application/x-protobuf"
},
"points1": {
"name": "public.points1.geom",
"content_type": "image/webp"
},
...
},
"sprites": {
"cool_icons": {
"images": [
"bicycle",
"bear"
]
},
...
},
"fonts": {
"Noto Mono Regular": {
"family": "Noto Mono",
"style": "Regular",
"glyphs": 875,
"start": 0,
"end": 65533
},
...
}
}
Source TileJSON
All tile sources have a TileJSON endpoint available at /{SourceID}. For example, a points function or table will be available at /points. A composite source combining the points and lines sources will be available at the /points,lines endpoint.
curl localhost:3000/points | jq
curl localhost:3000/points,lines | jq
Using with MapLibre
MapLibre is an open-source JavaScript library for showing maps on a website. MapLibre can accept MVT vector tiles generated by Martin and applies a style to them to draw a map using WebGL.
You can add a layer to the map and specify the Martin TileJSON endpoint as a vector source URL. You should also specify a source-layer property. For Table Sources it is {table_name} by default.
map.addLayer({
id: 'points',
type: 'circle',
source: {
type: 'vector',
url: 'http://localhost:3000/points'
},
'source-layer': 'points',
paint: {
'circle-color': 'red'
},
});
map.addSource('rpc', {
type: 'vector',
url: `http://localhost:3000/function_zxy_query`
});
map.addLayer({
id: 'points',
type: 'circle',
source: 'rpc',
'source-layer': 'function_zxy_query',
paint: {
'circle-color': 'blue'
},
});
You can also combine multiple sources into one with Composite Sources. Each source in a composite source can be accessed with its {source_name} as a source-layer property.
map.addSource('points', {
type: 'vector',
url: `http://0.0.0.0:3000/points1,points2`
});
map.addLayer({
id: 'red_points',
type: 'circle',
source: 'points',
'source-layer': 'points1',
paint: {
'circle-color': 'red'
}
});
map.addLayer({
id: 'blue_points',
type: 'circle',
source: 'points',
'source-layer': 'points2',
paint: {
'circle-color': 'blue'
}
});
Using with Leaflet
Leaflet is the leading open-source JavaScript library for mobile-friendly interactive maps.
You can add vector tiles using Leaflet.VectorGrid plugin. You must initialize a VectorGrid.Protobuf with a URL template, just like in L.TileLayers. The difference is that you should define the styling for all the features.
L.vectorGrid
.protobuf('http://localhost:3000/points/{z}/{x}/{y}', {
vectorTileLayerStyles: {
'points': {
color: 'red',
fill: true
}
}
})
.addTo(map);
Using with deck.gl
deck.gl is a WebGL-powered framework for visual exploratory data analysis of large datasets.
You can add vector tiles using MVTLayer. The MVTLayer data property defines the remote data for the MVT layer. It can be:
- String: either a URL template or a TileJSON URL.
- Array: an array of URL templates, which allows balancing the requests across different tile endpoints. For example, if you define an array with 4 URLs and 16 tiles need to be loaded, each endpoint is responsible for serving 16/4 tiles.
- JSON: a valid TileJSON object.
const pointsLayer = new MVTLayer({
data: 'http://localhost:3000/points',
pointRadiusUnits: 'pixels',
getRadius: 5,
getFillColor: [230, 0, 0]
});
const deckgl = new DeckGL({
container: 'map',
mapStyle: 'https://basemaps.cartocdn.com/gl/dark-matter-gl-style/style.json',
initialViewState: {
latitude: 0,
longitude: 0,
zoom: 1
},
layers: [pointsLayer]
});
Using with Mapbox
Mapbox GL JS is a JavaScript library for interactive, customizable vector maps on the web. Mapbox GL JS v1.x was open source, and it was forked as MapLibre, so using Martin with Mapbox is similar to the MapLibre setup described here. Mapbox GL JS can accept MVT vector tiles generated by Martin and applies a style to them to draw a map using WebGL.
You can add a layer to the map and specify the Martin TileJSON endpoint as a vector source URL. You should also specify a source-layer property. For Table Sources it is {table_name} by default.
map.addLayer({
id: 'points',
type: 'circle',
source: {
type: 'vector',
url: 'http://localhost:3000/points'
},
'source-layer': 'points',
paint: {
'circle-color': 'red'
}
});
Using with OpenLayers
OpenLayers is an open source library for creating interactive maps on the web. Similar to MapLibre GL JS, it can also display image and vector map tiles served by Martin Tile Server.
You can integrate tile services from martin and OpenLayers with its VectorTileLayer. Here is an example of adding a MixPoints vector tile source to an OpenLayers map.
const layer = new VectorTileLayer({
source: new VectorTileSource({
format: new MVT(),
url: 'http://0.0.0.0:3000/MixPoints/{z}/{x}/{y}',
maxZoom: 14,
}),
});
map.addLayer(layer);
Recipes
Using with DigitalOcean PostgreSQL
You can use Martin with Managed PostgreSQL from DigitalOcean with the PostGIS extension.
First, download the CA certificate and get your cluster connection string from the dashboard. After that, you can use the connection string and the CA certificate to connect to the database:
martin --ca-root-file ./ca-certificate.crt \
postgresql://user:password@host:port/db?sslmode=require
Using with Heroku PostgreSQL
You can use Martin with Managed PostgreSQL from Heroku with the PostGIS extension:
heroku pg:psql -a APP_NAME -c 'create extension postgis'
Use the same environment variables as Heroku suggests for psql.
export DATABASE_URL=$(heroku config:get DATABASE_URL -a APP_NAME)
export PGSSLCERT=DIRECTORY/PREFIXpostgresql.crt
export PGSSLKEY=DIRECTORY/PREFIXpostgresql.key
export PGSSLROOTCERT=DIRECTORY/PREFIXroot.crt
martin
You may also be able to validate SSL certificate with an explicit sslmode, e.g.
export DATABASE_URL="$(heroku config:get DATABASE_URL -a APP_NAME)?sslmode=verify-ca"
CLI Tools
The Martin project contains additional tooling to help manage the data served by the Martin tile server.
martin-cp
martin-cp is a tool for generating tiles in bulk and saving the retrieved tiles into a new or existing MBTiles file. It can be used to generate tiles for a large area or multiple areas. If multiple areas overlap, it will generate tiles only once. martin-cp supports the same configuration file and CLI arguments as the Martin server, so it supports all sources, including composite sources.
mbtiles
mbtiles is a small utility to interact with *.mbtiles files from the command line. It allows users to examine, copy, validate, compare, and apply diffs between them.
Use mbtiles --help to see a list of available commands, and mbtiles <command> --help to see help for a specific command.
This tool can be installed by compiling the latest released version with cargo install mbtiles --locked, or by downloading a pre-built binary from the releases page.
The mbtiles utility builds on top of the MBTiles specification. It adds a few additional conventions to ensure that the content of the tile data is valid and can be used for reliable diffing and patching of tilesets.
Generating Tiles in Bulk
martin-cp is a tool for generating tiles in bulk from any source(s) supported by Martin, and saving the retrieved tiles into a new or existing MBTiles file. martin-cp can be used to generate tiles for a large area or multiple areas (bounding boxes). If multiple areas overlap, it will ensure each tile is generated only once. martin-cp supports the same configuration file and CLI arguments as the Martin server, so it supports all sources, including composite sources.
After copying, martin-cp will update the agg_tiles_hash metadata value unless --skip-agg-tiles-hash is specified. This allows the MBTiles file to be validated using the mbtiles validate command.
Usage
This copies tiles from a PostGIS source source_name into an MBTiles file tileset.mbtiles using the normalized schema, with zoom levels 0 through 10 and the whole world as bounds.
martin-cp --output-file tileset.mbtiles \
--mbtiles-type normalized \
"--bbox=-180,-90,180,90" \
--min-zoom 0 \
--max-zoom 10 \
--source source_name \
postgresql://postgres@localhost:5432/db
MBTiles information and metadata
summary
Use mbtiles summary to get a summary of the contents of an MBTiles file. The command will print a table with the number of tiles per zoom level, the size of the smallest and largest tiles, and the average tile size at each zoom level. The command will also print the bounding box of the covered area per zoom level.
MBTiles file summary for tests/fixtures/mbtiles/world_cities.mbtiles
Schema: flat
File size: 48.00KiB
Page size: 4.00KiB
Page count: 12
Zoom | Count | Smallest | Largest | Average | Bounding Box
0 | 1 | 1.0KiB | 1.0KiB | 1.0KiB | -180,-85,180,85
1 | 4 | 160B | 650B | 366B | -180,-85,180,85
2 | 7 | 137B | 495B | 239B | -180,-67,180,67
3 | 17 | 67B | 246B | 134B | -135,-41,180,67
4 | 38 | 64B | 175B | 86B | -135,-41,180,67
5 | 57 | 64B | 107B | 72B | -124,-41,180,62
6 | 72 | 64B | 97B | 68B | -124,-41,180,62
all | 196 | 64B | 1.0KiB | 96B | -180,-85,180,85
meta-all
Print all metadata values to stdout, as well as the results of tile detection. The format of the values printed is not stable, and should only be used for visual inspection.
mbtiles meta-all my_file.mbtiles
meta-get
Retrieve a raw metadata value by its name. The value is printed to stdout without any modifications. For example, to get the description value from an mbtiles file:
mbtiles meta-get my_file.mbtiles description
meta-set
Set a metadata value by its name, or delete the key if no value is supplied. For example, to set the description value to A vector tile dataset:
mbtiles meta-set my_file.mbtiles description "A vector tile dataset"
MBTiles Schemas
The mbtiles tool builds on top of the original MBTiles specification by specifying three different kinds of schema for tile data: flat, flat-with-hash, and normalized. The mbtiles tool can convert between these schemas, generate a diff between two files of any schema, and merge multiple schema files into one file.
flat
The flat schema is the closest to the original MBTiles specification. It stores all tiles in a single table. This schema is the most efficient when the tileset contains no duplicate tiles.
CREATE TABLE tiles (
zoom_level INTEGER,
tile_column INTEGER,
tile_row INTEGER,
tile_data BLOB);
CREATE UNIQUE INDEX tile_index on tiles (
zoom_level, tile_column, tile_row);
flat-with-hash
Similar to the flat schema, but also includes a tile_hash column containing a hash of the tile_data column. Use this schema when the tileset has no duplicate tiles, but you still want to be able to validate the content of each tile individually.
CREATE TABLE tiles_with_hash (
zoom_level INTEGER NOT NULL,
tile_column INTEGER NOT NULL,
tile_row INTEGER NOT NULL,
tile_data BLOB,
tile_hash TEXT);
CREATE UNIQUE INDEX tiles_with_hash_index on tiles_with_hash (
zoom_level, tile_column, tile_row);
CREATE VIEW tiles AS
SELECT zoom_level, tile_column, tile_row, tile_data
FROM tiles_with_hash;
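For illustration, here is how a tile could be written into and verified against this schema using Python's sqlite3 module, assuming the uppercase hex MD5 convention used elsewhere in this guide's validation examples:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")  # use a real .mbtiles path in practice
conn.executescript("""
CREATE TABLE tiles_with_hash (
    zoom_level INTEGER NOT NULL,
    tile_column INTEGER NOT NULL,
    tile_row INTEGER NOT NULL,
    tile_data BLOB,
    tile_hash TEXT);
CREATE UNIQUE INDEX tiles_with_hash_index ON tiles_with_hash (
    zoom_level, tile_column, tile_row);
CREATE VIEW tiles AS
    SELECT zoom_level, tile_column, tile_row, tile_data
    FROM tiles_with_hash;
""")

tile_data = b"\x1f\x8b..."  # some tile blob
conn.execute(
    "INSERT INTO tiles_with_hash VALUES (?, ?, ?, ?, ?)",
    (0, 0, 0, tile_data, hashlib.md5(tile_data).hexdigest().upper()),
)

# Per-tile validation: recompute each hash and compare with the stored one
for data, stored in conn.execute("SELECT tile_data, tile_hash FROM tiles_with_hash"):
    assert hashlib.md5(data).hexdigest().upper() == stored
```

The tiles view keeps the file readable by tools that only understand the flat layout.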
normalized
The normalized schema is the most efficient when the tileset contains duplicate tiles. It stores all tile blobs in the images table, and stores the tile Z,X,Y coordinates in a map table. The map table contains a tile_id column that is a foreign key to the images table. The tile_id column is a hash of the tile_data column, making it possible both to validate each individual tile, as in the flat-with-hash schema, and to optimize storage by storing each unique tile only once.
CREATE TABLE map (
zoom_level INTEGER,
tile_column INTEGER,
tile_row INTEGER,
tile_id TEXT);
CREATE TABLE images (
tile_id TEXT,
tile_data BLOB);
CREATE UNIQUE INDEX map_index ON map (
zoom_level, tile_column, tile_row);
CREATE UNIQUE INDEX images_id ON images (
tile_id);
CREATE VIEW tiles AS
SELECT
map.zoom_level AS zoom_level,
map.tile_column AS tile_column,
map.tile_row AS tile_row,
images.tile_data AS tile_data
FROM
map JOIN images
ON images.tile_id = map.tile_id;
Optionally, .mbtiles files with the normalized schema can include a tiles_with_hash view. All normalized files created by the mbtiles tool will contain this view.
CREATE VIEW tiles_with_hash AS
SELECT
map.zoom_level AS zoom_level,
map.tile_column AS tile_column,
map.tile_row AS tile_row,
images.tile_data AS tile_data,
images.tile_id AS tile_hash
FROM
map JOIN images
ON map.tile_id = images.tile_id;
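The deduplication this schema enables can be sketched with Python's sqlite3 module (an illustration only, assuming MD5-based tile_id values):

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE map (zoom_level INTEGER, tile_column INTEGER,
                  tile_row INTEGER, tile_id TEXT);
CREATE TABLE images (tile_id TEXT, tile_data BLOB);
CREATE UNIQUE INDEX map_index ON map (zoom_level, tile_column, tile_row);
CREATE UNIQUE INDEX images_id ON images (tile_id);
CREATE VIEW tiles AS
    SELECT map.zoom_level, map.tile_column, map.tile_row, images.tile_data
    FROM map JOIN images ON images.tile_id = map.tile_id;
""")

def insert_tile(z, x, y, data):
    tile_id = hashlib.md5(data).hexdigest().upper()
    # identical blobs share a single images row
    conn.execute("INSERT OR IGNORE INTO images VALUES (?, ?)", (tile_id, data))
    conn.execute("INSERT INTO map VALUES (?, ?, ?, ?)", (z, x, y, tile_id))

ocean = b"empty blue tile"
insert_tile(1, 0, 0, ocean)
insert_tile(1, 1, 0, ocean)  # duplicate content, stored only once

print(conn.execute("SELECT COUNT(*) FROM tiles").fetchone()[0])   # 2 tiles
print(conn.execute("SELECT COUNT(*) FROM images").fetchone()[0])  # 1 blob
```

Two tiles are addressable through the tiles view, yet only one blob is stored, which is exactly the trade-off that makes this schema efficient for tilesets with many repeated tiles (e.g. ocean areas).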
Copying, Diffing, and Patching MBTiles
mbtiles copy
The copy command copies an mbtiles file, optionally filtering its content by zoom levels.
mbtiles copy src_file.mbtiles dst_file.mbtiles \
--min-zoom 0 --max-zoom 10
This command can also be used to generate files of a different supported schema.
mbtiles copy normalized.mbtiles dst.mbtiles \
--dst-mbttype flat-with-hash
mbtiles copy --diff-with-file
This option is identical to using mbtiles diff .... The following two commands are equivalent:
mbtiles diff file1.mbtiles file2.mbtiles diff.mbtiles
mbtiles copy file1.mbtiles diff.mbtiles \
--diff-with-file file2.mbtiles
mbtiles copy --apply-patch
Copy a source file to a destination while also applying the diff file generated by the copy --diff-with-file command above to the destination mbtiles file. This allows safer application of the diff file, as the source file is not modified.
mbtiles copy src_file.mbtiles dst_file.mbtiles \
--apply-patch diff.mbtiles
Diffing MBTiles
mbtiles diff
The diff command compares two mbtiles files and generates a delta (diff) file. The diff file can be applied to the src_file.mbtiles elsewhere, to avoid copying/transmitting the entire modified dataset. The delta file will contain all tiles that differ between the two files (modifications, insertions, and deletions as NULL values), for both the tile and metadata tables.
There is one exception: the agg_tiles_hash metadata value will be renamed to agg_tiles_hash_after_apply, and a new agg_tiles_hash will be generated for the diff file itself. This is done to avoid confusion when applying the diff file to the original file, as the agg_tiles_hash value will be different after the diff is applied. The apply-patch command will automatically rename the agg_tiles_hash_after_apply value back to agg_tiles_hash when applying the diff.
# This command will compare `file1.mbtiles` and `file2.mbtiles`, and generate a new diff file `diff.mbtiles`.
mbtiles diff file1.mbtiles file2.mbtiles diff.mbtiles
# If diff.mbtiles is applied to file1.mbtiles, it will produce file2.mbtiles
mbtiles apply-patch file1.mbtiles diff.mbtiles file2a.mbtiles
# file2.mbtiles and file2a.mbtiles should now be the same
# Validate both files and see that their hash values are identical
mbtiles validate file2.mbtiles
[INFO ] The agg_tiles_hashes=E95C1081447FB25674DCC1EB97F60C26 has been verified for file2.mbtiles
mbtiles validate file2a.mbtiles
[INFO ] The agg_tiles_hashes=E95C1081447FB25674DCC1EB97F60C26 has been verified for file2a.mbtiles
mbtiles apply-patch
Apply the diff file generated with the mbtiles diff command above to an MBTiles file. The diff file can be applied to the src_file.mbtiles that has been previously downloaded, to avoid copying/transmitting the entire modified dataset again. The src_file.mbtiles will be modified in-place. It is also possible to apply the diff file while copying the source file to a new destination file, by using the mbtiles copy --apply-patch command.
Note that the agg_tiles_hash_after_apply metadata value will be renamed to agg_tiles_hash when applying the diff. This is done to avoid confusion when applying the diff file to the original file, as the agg_tiles_hash value will be different after the diff is applied.
mbtiles apply-patch src_file.mbtiles diff_file.mbtiles
Applying diff with SQLite
Another way to apply the diff is to use the sqlite3 command line tool directly. This SQL will delete all tiles from src_file.mbtiles that are set to NULL in diff_file.mbtiles, and then insert or update all new tiles from diff_file.mbtiles into src_file.mbtiles, where both files are of the flat type. The name of the diff file is passed as a query parameter to the sqlite3 command line tool, and then used in the SQL statements. Note that this does not update the agg_tiles_hash metadata value, so it will be incorrect after the diff is applied.
sqlite3 src_file.mbtiles \
-bail \
-cmd ".parameter set @diffDbFilename diff_file.mbtiles" \
"ATTACH DATABASE @diffDbFilename AS diffDb;" \
"DELETE FROM tiles WHERE (zoom_level, tile_column, tile_row) IN (SELECT zoom_level, tile_column, tile_row FROM diffDb.tiles WHERE tile_data ISNULL);" \
"INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data) SELECT * FROM diffDb.tiles WHERE tile_data NOTNULL;"
MBTiles Validation
The original MBTiles specification does not provide any guarantees for the content of the tile data in MBTiles. mbtiles validate assumes a few additional conventions and uses them to ensure that the content of the tile data is valid, performing several validation steps. If the file is not valid, the command will print an error message and exit with a non-zero exit code.
mbtiles validate src_file.mbtiles
SQLite Integrity check
The validate command will run PRAGMA integrity_check on the file, and will fail if the result is not ok. The --integrity-check flag can be used to disable this check, or to make it more thorough with the full value. The default is quick.
Schema check
The validate command will verify that the tiles table/view exists and has the expected columns and indexes. It will also verify the same for the metadata table/view.
Per-tile validation
If the .mbtiles file uses the flat-with-hash or normalized schema, the validate command will verify that the MD5 hash of the tile_data column matches the tile_hash or tile_id column (depending on the schema).
A typical normalized schema generated by tools like tilelive-copy uses an MD5 hash in the tile_id column. Martin's mbtiles tool can use this hash to verify the content of each tile. We also define a new flat-with-hash schema that stores the hash and the tile data in the same table, allowing per-tile validation without the multiple-table layout.
Per-tile validation is not available for the flat schema, and will be skipped.
Aggregate Content Validation
Per-tile validation will catch individual tile corruption, but it will not detect overall datastore corruption such as missing tiles, tiles that should not exist, or tiles with incorrect z/x/y values. For that, the mbtiles tool defines a new metadata value called agg_tiles_hash.
The value is computed by hashing the combined value of all rows in the tiles table/view, ordered by z,x,y. The value is computed using the following SQL expression, which uses a custom md5_concat_hex function from the sqlite-hashes crate:
md5_concat_hex(
CAST(zoom_level AS TEXT),
CAST(tile_column AS TEXT),
CAST(tile_row AS TEXT),
tile_data)
In case there are no rows, or all are NULL, the hash value of an empty string is used. Note that SQLite allows a value of any type to be stored in any column, so if tile_data accidentally contains a non-blob/text/null value, validation will fail.
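For illustration, an equivalent computation in Python (a sketch of the convention rather than the mbtiles implementation; it ignores the all-NULL edge case):

```python
import hashlib
import sqlite3

def agg_tiles_hash(conn):
    """MD5 over the concatenated z/x/y (as text) and tile_data of all
    tiles, ordered by z, x, y; an empty tileset hashes the empty string."""
    md5 = hashlib.md5()
    rows = conn.execute("""
        SELECT zoom_level, tile_column, tile_row, tile_data FROM tiles
        ORDER BY zoom_level, tile_column, tile_row""")
    for z, x, y, data in rows:
        md5.update(str(z).encode())
        md5.update(str(x).encode())
        md5.update(str(y).encode())
        md5.update(data)
    return md5.hexdigest().upper()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tiles (zoom_level INTEGER, tile_column INTEGER, "
             "tile_row INTEGER, tile_data BLOB)")
# With no rows, the result is the MD5 of an empty string:
print(agg_tiles_hash(conn))  # D41D8CD98F00B204E9800998ECF8427E
```

Because the hash streams over all rows in a fixed order, any missing, extra, or misplaced tile changes the aggregate value.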
The mbtiles tool will compute the agg_tiles_hash value when copying or validating mbtiles files. Use --agg-hash update to force the value to be updated, even if it is incorrect or does not exist.
Development
Clone Martin, setting the remote name to upstream. This way, the main branch will be updated automatically with the latest changes from the upstream repo.
git clone https://github.com/maplibre/martin.git -o upstream
cd martin
Fork the Martin repo into your own GitHub account, and add your fork as a remote:
git remote add origin _URL_OF_YOUR_FORK_
Install docker and docker-compose
# Ubuntu-based distros have an older version that might also work:
sudo apt install -y docker.io docker-compose
Install a few required libs and tools:
# For Ubuntu-based distros
sudo apt install -y build-essential pkg-config jq file
Install Just (an improved Makefile processor). Note that some Linux and Homebrew distros have outdated versions of Just, so you should install it from source:
cargo install just --locked
When developing MBTiles SQL code, you may need to run just prepare-sqlite whenever SQL queries are modified.
Run just to see all available commands.
Martin as a library
Martin can be used as a standalone server, or as a library in your own Rust application. When used as a library, you can enable the following features:
- postgres - enable PostgreSQL/PostGIS tile sources
- pmtiles - enable PMTiles tile sources
- mbtiles - enable MBTiles tile sources
- fonts - enable font sources
- sprites - enable sprite sources
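For example, a downstream Cargo.toml might enable only the features it needs (a sketch: the version requirement is a placeholder, and the crate's default feature set may differ; check the crate documentation):

```toml
[dependencies]
# hypothetical selection: serve only local MBTiles files and sprites
martin = { version = "*", default-features = false, features = ["mbtiles", "sprites"] }
```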