diff --git a/docs/usage/configuration/config_documentation.md b/docs/usage/configuration/config_documentation.md
index 21dad0ac41..3ad3085bfa 100644
--- a/docs/usage/configuration/config_documentation.md
+++ b/docs/usage/configuration/config_documentation.md
@@ -467,13 +467,13 @@ Sub-options for each listener include:
Valid resource names are:
-* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies 'media' and 'static'.
+* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
* `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.
* `federation`: the server-server API (/_matrix/federation). Also implies `media`, `keys`, `openid`
-* `keys`: the key discovery API (/_matrix/keys).
+* `keys`: the key discovery API (/_matrix/key).
* `media`: the media API (/_matrix/media).
@@ -1119,7 +1119,17 @@ Caching can be configured through the following sub-options:
with intermittent connections, at the cost of higher memory usage.
By default, this is zero, which means that sync responses are not cached
at all.
-
+* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
+  `min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
+  usage and cache entry availability. You must be using [jemalloc](https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu)
+  to utilize this option, and all three of the options must be specified for this feature to work.
+  * `max_cache_memory_usage` sets a ceiling on how much memory the caches can use before they begin to be continuously evicted.
+    They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set by
+    the option below, or until the `min_cache_ttl` is hit.
+  * `target_cache_memory_usage` sets a rough target for the desired memory usage of the caches.
+  * `min_cache_ttl` sets a limit under which newer cache entries are not evicted, and is only applied when
+    caches are actively being evicted (i.e. when `max_cache_memory_usage` has been exceeded). This is to protect hot caches
+    from being emptied while Synapse is evicting due to memory pressure.
+
Example configuration:
```yaml
@@ -1127,9 +1137,29 @@ caches:
global_factor: 1.0
per_cache_factors:
get_users_who_share_room_with_user: 2.0
- expire_caches: false
sync_response_cache_duration: 2m
+ cache_autotuning:
+ max_cache_memory_usage: 1024M
+ target_cache_memory_usage: 758M
+ min_cache_ttl: 5m
+```
+
+### Reloading cache factors
+
+The cache factors (i.e. `caches.global_factor` and `caches.per_cache_factors`) may be reloaded at any time by sending a
+[`SIGHUP`](https://en.wikipedia.org/wiki/SIGHUP) signal to Synapse using e.g.
+
+```commandline
+kill -HUP [PID_OF_SYNAPSE_PROCESS]
```
+
+If you are running multiple workers, you must update each worker's config file
+and send this signal to each worker process individually.
+
+If you're using the [example systemd service](https://github.com/matrix-org/synapse/blob/develop/contrib/systemd/matrix-synapse.service)
+file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
+`systemctl reload matrix-synapse`.
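+
+For example, with that systemd unit a reload (and hence a cache factor reload) can be
+triggered as shown below. The per-worker loop is only a sketch and assumes your worker
+processes were started via `synapse.app.generic_worker`; adjust the match pattern to
+your deployment:
+
+```commandline
+# Using the example systemd service:
+systemctl reload matrix-synapse
+
+# Or, signal each worker process directly (sketch; adjust the pattern to
+# match how your workers are launched):
+for pid in $(pgrep -f synapse.app.generic_worker); do
+    kill -HUP "$pid"
+done
+```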
+
---
## Database ##
Config options related to database settings.
@@ -1327,6 +1357,20 @@ This option sets ratelimiting how often invites can be sent in a room or to a
specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10` and
`per_user` defaults to `per_second: 0.003`, `burst_count: 5`.
+Client requests that invite user(s) when [creating a
+room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
+will count against the `rc_invites.per_room` limit, whereas
+client requests to [invite a single user to a
+room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidinvite)
+will count against both the `rc_invites.per_user` and `rc_invites.per_room` limits.
+
+Federation requests to invite a user will count against the `rc_invites.per_user`
+limit only, as Synapse presumes ratelimiting by room will be done by the sending server.
+
+The `rc_invites.per_user` limit applies to the *receiver* of the invite, rather than the
+sender, meaning that an `rc_invites.per_user.burst_count` of 5 mandates that a single user
+cannot *receive* more than a burst of 5 invites at a time.
+
Example configuration:
```yaml
rc_invites:
@@ -3298,6 +3342,32 @@ room_list_publication_rules:
room_id: "*"
action: allow
```
+
+---
+Config option: `default_power_level_content_override`
+
+The `default_power_level_content_override` option controls the default power
+levels for rooms.
+
+This is useful if you know that your users need special permissions in rooms
+that they create (e.g. to send particular types of state events without
+needing an elevated power level). This takes the same shape as the
+`power_level_content_override` parameter in the /createRoom API, but
+is applied before that parameter.
+
+Note that each key provided inside a preset (for example `events` in the example
+below) will overwrite all existing defaults inside that key. So in the example
+below, newly-created `private_chat` rooms will have no rules for any event types
+except `com.example.foo`.
+
+Example configuration:
+```yaml
+default_power_level_content_override:
+ private_chat: { "events": { "com.example.foo" : 0 } }
+ trusted_private_chat: null
+ public_chat: null
+```
+
---
## Opentracing ##
Configuration options related to Opentracing support.
@@ -3398,7 +3468,7 @@ stream_writers:
typing: worker1
```
---
-Config option: `run_background_task_on`
+Config option: `run_background_tasks_on`
The worker that is used to run background tasks (e.g. cleaning up expired
data). If not provided this defaults to the main process.
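+
+For example, to hand background tasks to a dedicated worker (a sketch; `background_worker`
+is an assumed `worker_name` from your worker configuration):
+
+```yaml
+run_background_tasks_on: background_worker
+```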
|