Mirror of https://github.com/superseriousbusiness/gotosocial.git, synced 2024-11-27 19:01:01 +00:00
[docs] Serve static assets with nginx (#1251)
* [docs] Serve static assets with nginx

  This explains how to use nginx to serve static assets and offload GTS from that responsibility. It also shows how to have nginx add caching headers to indicate to clients how long they may cache an asset.

* [docs] Move additional nginx config to advanced

  This moves a bunch of additional nginx configuration into the Advanced page instead. It declutters the nginx configuration page.
parent cb2b2fd805
commit ce615b5d59
2 changed files with 130 additions and 75 deletions
@@ -256,3 +256,132 @@ or Systemd service file to enable the profile.

If SELinux is available on your system, you can optionally install [SELinux
policy](https://github.com/lzap/gotosocial-selinux) to further improve security.

## nginx

This section contains a number of additional things for configuring nginx.

### Extra Hardening

If you want to harden your nginx deployment with advanced configuration options, there are many guides online for doing so ([for example](https://beaglesecurity.com/blog/article/nginx-server-security.html)). Try to find one that's up to date. Mozilla also publishes best-practice SSL configuration [here](https://ssl-config.mozilla.org/).

### Caching Webfinger

It's possible to use nginx to cache webfinger responses. This can be useful to ensure clients still get a response on the webfinger endpoint even if GTS is (temporarily) down.

You'll need to configure two things:

* A cache path
* An additional `location` block for webfinger

First, the cache path, which needs to be set in the `http` section, usually inside your `nginx.conf`:

```nginx.conf
http {
    ... there will be other things here ...

    proxy_cache_path /var/cache/nginx keys_zone=ap_webfinger:10m inactive=1w;
}
```

This configures a cache whose entries will be kept for up to one week if they're not accessed. The key zone is named `ap_webfinger`, but you can name it whatever you want. Its 10MB size has room for a lot of cache keys, so on small instances you can probably use a much smaller value.
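
For instance, on a small single-user instance you might shrink the key zone considerably. A minimal sketch, assuming the same cache path as above (the `1m` size is an illustrative value, not an upstream recommendation):

```nginx.conf
http {
    # A 1MB key zone still holds roughly 8000 keys; unused entries
    # are still evicted after a week, exactly as in the 10MB example.
    proxy_cache_path /var/cache/nginx keys_zone=ap_webfinger:1m inactive=1w;
}
```
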
Second, actually use the cache for webfinger:

```nginx.conf
server {
    server_name example.org;

    location /.well-known/webfinger {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_cache ap_webfinger;
        proxy_cache_background_update on;
        proxy_cache_key $scheme://$host$uri$is_args$query_string;
        proxy_cache_valid 200 10m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504 http_429;
        proxy_cache_lock on;
        add_header X-Cache-Status $upstream_cache_status;

        proxy_pass http://localhost:8080;
    }

    location / {
        proxy_pass http://localhost:8080/;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    client_max_body_size 40M;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
```

The `proxy_pass` and `proxy_set_header` directives are mostly the same as in the base configuration, but the `proxy_cache*` entries warrant some explanation:

* `proxy_cache ap_webfinger` tells nginx to use the `ap_webfinger` cache zone we previously created. If you named it something else, change this value accordingly.
* `proxy_cache_background_update on` means nginx will try to refresh a cached resource that's about to expire in the background, to ensure it has a current copy on disk.
* `proxy_cache_key` is configured so that the query string is part of the cache key. This way, requests for `.well-known/webfinger?acct=user1@example.org` and `.well-known/webfinger?acct=user2@example.org` are not treated as the same resource.
* `proxy_cache_valid 200 10m;` means we only cache 200 responses from GTS, and only for 10 minutes. You can add additional lines like this, such as `proxy_cache_valid 404 1m;` to cache 404 responses for 1 minute.
* `proxy_cache_use_stale` tells nginx it's allowed to use a stale cache entry (one older than 10 minutes) in certain cases.
* `proxy_cache_lock on` means that if a resource is not cached and there are multiple concurrent requests for it, the queries will be queued up so that only one request goes through and the rest are then answered from cache.
* `add_header X-Cache-Status $upstream_cache_status` adds an `X-Cache-Status` header to the response so you can check whether things are getting cached. You can remove this.

Tweaking `proxy_cache_use_stale` is how you can ensure webfinger responses are still answered even if GTS itself is down. The provided configuration will serve a stale response if there's an error proxying to GTS, if the connection to GTS times out, if GTS returns a 5xx status code, or if GTS returns 429 (Too Many Requests). The `updating` value says that we're allowed to serve a stale entry while nginx is in the process of refreshing its cache. Because we configured `inactive=1w` in the `proxy_cache_path` directive, nginx may serve a response up to one week old if the conditions in `proxy_cache_use_stale` are met.
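
If you also want to cache "not found" responses, as suggested in the list above, the `location` block grows by one line. A sketch of that variant, with everything else unchanged (the 1 minute value is just the example from above):

```nginx.conf
location /.well-known/webfinger {
    # proxy_set_header lines as in the block above ...

    proxy_cache ap_webfinger;
    proxy_cache_background_update on;
    proxy_cache_key $scheme://$host$uri$is_args$query_string;

    # Cache successful lookups for 10 minutes; additionally cache
    # "no such user" responses for 1 minute, so repeated lookups of
    # unknown accounts don't all hit GTS.
    proxy_cache_valid 200 10m;
    proxy_cache_valid 404 1m;

    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504 http_429;
    proxy_cache_lock on;

    proxy_pass http://localhost:8080;
}
```
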
### Serving static assets

By default, GTS will serve assets like the CSS and fonts for the web UI, as well as attachments for statuses. However, it's very simple to have nginx do this instead and offload GTS from that responsibility. Nginx can generally do a faster job of this too, since it can use newer functionality in the OS that the Go runtime hasn't necessarily adopted yet.

There are two paths that nginx can handle for us:

* `/assets`, which contains fonts, CSS, images, and so on for the web UI
* `/fileserver`, which serves attachments for status posts when using the local storage backend

For `/assets` we'll need the value of `web-asset-base-dir` from the configuration, and for `/fileserver` we'll want `storage-local-base-path`. You can then adjust your nginx configuration like this:

```nginx.conf
server {
    server_name example.org;

    location /assets/ {
        alias web-asset-base-dir/;
        autoindex off;
        expires 5m;
        add_header Cache-Control "public";
    }

    location /fileserver/ {
        alias storage-local-base-path/;
        autoindex off;
        expires max;
        add_header Cache-Control "public, immutable";
    }

    location / {
        proxy_pass http://localhost:8080/;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    client_max_body_size 40M;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
```

The trailing slashes in the new `location` directives and in the `alias` values are significant; do not remove them (a sketch with the placeholder paths filled in follows the list below). The `expires` directive adds the necessary headers to inform the client how long it may cache the resource. For assets, which may change on each release, this example uses 5 minutes. For attachments, which should never change once they're created, `max` is used instead, setting the cache expiry to the 31st of December 2037. For other options, see the nginx documentation on the [`expires` directive](https://nginx.org/en/docs/http/ngx_http_headers_module.html#expires). Nginx does not add cache headers to 4xx or 5xx response codes, so a failure to fetch an asset won't get cached by clients. The `autoindex off` directive tells nginx not to serve a directory listing; this should be the default, but it doesn't hurt to be explicit. The added `add_header` lines set additional options for the `Cache-Control` header:

* `public` indicates that anyone may cache this resource
* `immutable` indicates that this resource will never change while it's fresh (before the end of the `expires` period), allowing clients to forgo conditional requests to revalidate the resource during that timespan
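
As a concrete illustration, here are the two new `location` blocks with hypothetical values filled in. `/opt/gotosocial/web/assets` and `/opt/gotosocial/storage` are stand-ins, not defaults; substitute whatever `web-asset-base-dir` and `storage-local-base-path` are actually set to in your config:

```nginx.conf
# These blocks go inside the server { } block shown above.
# Hypothetical paths: replace them with your actual configuration values.
location /assets/ {
    alias /opt/gotosocial/web/assets/;  # web-asset-base-dir, trailing slash kept
    autoindex off;
    expires 5m;
    add_header Cache-Control "public";
}

location /fileserver/ {
    alias /opt/gotosocial/storage/;     # storage-local-base-path, trailing slash kept
    autoindex off;
    expires max;
    add_header Cache-Control "public, immutable";
}
```
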
|
||||||
|
|
|
@@ -181,78 +181,4 @@ server {
}
```

A number of additional configurations for nginx, including static asset serving and caching, are documented in the [Advanced](advanced.md) section of our documentation.