Diffstat (limited to 'docs/usage/configuration')
-rw-r--r-- docs/usage/configuration/application_services.md | 35
-rw-r--r-- docs/usage/configuration/consent_tracking.md | 197
-rw-r--r-- docs/usage/configuration/json_web_tokens.md | 97
-rw-r--r-- docs/usage/configuration/message_retention_policies.md | 205
-rw-r--r-- docs/usage/configuration/registration_captcha.md | 37
-rw-r--r-- docs/usage/configuration/server_notices.md | 61
-rw-r--r-- docs/usage/configuration/structured_logging.md | 161
-rw-r--r-- docs/usage/configuration/templates.md | 239
-rw-r--r-- docs/usage/configuration/user_authentication/password_auth_providers.md | 129
-rw-r--r-- docs/usage/configuration/user_authentication/single_sign_on/openid.md | 572
-rw-r--r-- docs/usage/configuration/user_authentication/single_sign_on/sso_mapping_providers.md | 197
-rw-r--r-- docs/usage/configuration/user_directory.md | 49
-rw-r--r-- docs/usage/configuration/workers/README.md | 560
-rw-r--r-- docs/usage/configuration/workers/synctl_workers.md | 36
14 files changed, 2575 insertions(+), 0 deletions(-)
diff --git a/docs/usage/configuration/application_services.md b/docs/usage/configuration/application_services.md
new file mode 100644
index 0000000000..e4592010a2
--- /dev/null
+++ b/docs/usage/configuration/application_services.md
@@ -0,0 +1,35 @@
+# Registering an Application Service
+
+The registration of new application services depends on the homeserver used.
+In Synapse, you need to create a new configuration file for your AS and add it
+to the list specified under the `app_service_config_files` config
+option in your Synapse config.
+
+For example:
+
+```yaml
+app_service_config_files:
+- /home/matrix/.synapse/<your-AS>.yaml
+```
+
+The format of the AS configuration file is as follows:
+
+```yaml
+url: <base url of AS>
+as_token: <token AS will add to requests to HS>
+hs_token: <token HS will add to requests to AS>
+sender_localpart: <localpart of AS user>
+namespaces:
+  users:  # List of users we're interested in
+    - exclusive: <bool>
+      regex: <regex>
+      group_id: <group>
+    - ...
+  aliases: []  # List of aliases we're interested in
+  rooms: [] # List of room ids we're interested in
+```
+
+* `exclusive`: If enabled, only this application service is allowed to register users in its namespace(s).
+* `group_id`: All users of this application service are dynamically joined to this group. This is useful for e.g. user organisation or flairs.
+
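+For instance, a bridge-style application service that should have exclusive
+control over all user IDs starting with `@irc_` might use a `namespaces`
+section like the following (the regex is just an illustration, not a value
+Synapse requires):
+
+```yaml
+namespaces:
+  users:
+    - exclusive: true
+      regex: "@irc_.*"
+  aliases: []
+  rooms: []
+```
+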
+See the [spec](https://matrix.org/docs/spec/application_service/unstable.html) for further details on how application services work.
diff --git a/docs/usage/configuration/consent_tracking.md b/docs/usage/configuration/consent_tracking.md
new file mode 100644
index 0000000000..fb1fec80fe
--- /dev/null
+++ b/docs/usage/configuration/consent_tracking.md
@@ -0,0 +1,197 @@
+Support in Synapse for tracking agreement to server terms and conditions
+========================================================================
+
+Synapse 0.30 introduces support for tracking whether users have agreed to the
+terms and conditions set by the administrator of a server - and blocking access
+to the server until they have.
+
+There are several parts to this functionality; each requires some specific
+configuration in `homeserver.yaml` to be enabled.
+
+Note that various parts of the configuration and this document refer to the
+"privacy policy": agreement with a privacy policy is one particular use of this
+feature, but of course administrators can specify other terms and conditions
+unrelated to "privacy" per se.
+
+Collecting policy agreement from a user
+---------------------------------------
+
+Synapse can be configured to serve the user a simple policy form with an
+"accept" button. Clicking "Accept" records the user's acceptance in the
+database and shows a success page.
+
+To enable this, first create templates for the policy and success pages.
+These should be stored on the local filesystem.
+
+These templates use the [Jinja2](http://jinja.pocoo.org) templating language,
+and [docs/privacy_policy_templates](https://github.com/matrix-org/synapse/tree/develop/docs/privacy_policy_templates/)
+gives examples of the sort of thing that can be done.
+
+Note that the templates must be stored under a name giving the language of the
+template - currently this must always be `en` (for "English");
+internationalisation support is intended for the future.
+
+The template for the policy itself should be versioned and named according to
+the version: for example `1.0.html`. The version of the policy which the user
+has agreed to is stored in the database.
+
+Once the templates are in place, make the following changes to `homeserver.yaml`:
+
+ 1. Add a `user_consent` section, which should look like:
+
+    ```yaml
+    user_consent:
+      template_dir: privacy_policy_templates
+      version: 1.0
+    ```
+
+    `template_dir` points to the directory containing the policy
+    templates. `version` defines the version of the policy which will be served
+    to the user. In the example above, Synapse will serve
+    `privacy_policy_templates/en/1.0.html`.
+
+
+ 2. Add a `form_secret` setting at the top level:
+
+
+    ```yaml
+    form_secret: "<unique secret>"
+    ```
+
+    This should be set to an arbitrary secret string (try `pwgen -y 30` to
+    generate suitable secrets).
+
+    More on what this is used for below.
+
+ 3. Add `consent` wherever the `client` resource is currently enabled in the
+    `listeners` configuration. For example:
+
+    ```yaml
+    listeners:
+      - port: 8008
+        resources:
+          - names:
+            - client
+            - consent
+    ```
+
+
+Finally, ensure that `jinja2` is installed. If you are using a virtualenv, this
+should be a matter of `pip install Jinja2`. On Debian, try `apt-get install
+python-jinja2`.
+
+Once this is complete, and the server has been restarted, try visiting
+`https://<server>/_matrix/consent`. If correctly configured, this should give
+an error "Missing string query parameter 'u'". It is now possible to manually
+construct URIs where users can give their consent.
+
+### Enabling consent tracking at registration
+
+1. Add the following to your configuration:
+
+   ```yaml
+   user_consent:
+     require_at_registration: true
+     policy_name: "Privacy Policy" # or whatever you'd like to call the policy
+   ```
+
+2. In your consent templates, make use of the `public_version` variable to
+   see if an unauthenticated user is viewing the page. This is typically
+   wrapped around the form that would be used to actually agree to the document:
+
+   ```html
+   {% if not public_version %}
+     <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
+     <form method="post" action="consent">
+       <input type="hidden" name="v" value="{{version}}"/>
+       <input type="hidden" name="u" value="{{user}}"/>
+       <input type="hidden" name="h" value="{{userhmac}}"/>
+       <input type="submit" value="Sure thing!"/>
+     </form>
+   {% endif %}
+   ```
+
+3. Restart Synapse to apply the changes.
+
+Visiting `https://<server>/_matrix/consent` should now give you a view of the privacy
+document. This is what users will be able to see when registering for accounts.
+
+### Constructing the consent URI
+
+It may be useful to manually construct the "consent URI" for a given user - for
+instance, in order to send them an email asking them to consent. To do this,
+take the base `https://<server>/_matrix/consent` URL and add the following
+query parameters:
+
+ * `u`: the user id of the user. This can either be a full MXID
+   (`@user:server.com`) or just the localpart (`user`).
+
+ * `h`: hex-encoded HMAC-SHA256 of `u` using the `form_secret` as a key. It is
+   possible to calculate this on the commandline with something like:
+
+   ```bash
+   echo -n '<user>' | openssl sha256 -hmac '<form_secret>'
+   ```
+
+   This should result in a URI which looks something like:
+   `https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
+
+
+Note that not providing a `u` parameter will be interpreted as wanting to view
+the document from an unauthenticated perspective, such as prior to registration.
+Therefore, the `h` parameter is not required in this scenario. To enable this
+behaviour, set `require_at_registration` to `true` in your `user_consent` config.
+
+
+Sending users a server notice asking them to agree to the policy
+----------------------------------------------------------------
+
+It is possible to configure Synapse to send a [server
+notice](server_notices.md) to anybody who has not yet agreed to the current
+version of the policy. To do so:
+
+ * ensure that the consent resource is configured, as in the previous section
+
+ * ensure that server notices are configured, as in [the server notice documentation](server_notices.md).
+
+ * Add `server_notice_content` under `user_consent` in `homeserver.yaml`. For
+   example:
+
+   ```yaml
+   user_consent:
+     server_notice_content:
+       msgtype: m.text
+       body: >-
+         Please give your consent to the privacy policy at %(consent_uri)s.
+   ```
+
+   Synapse automatically replaces the placeholder `%(consent_uri)s` with the
+   consent uri for that user.
+
+ * ensure that `public_baseurl` is set in `homeserver.yaml`, and gives the base
+   URI that clients use to connect to the server. (It is used to construct
+   `consent_uri` in the server notice.)
+
+
+Blocking users from using the server until they agree to the policy
+-------------------------------------------------------------------
+
+Synapse can be configured to block any attempts to join rooms or send messages
+until the user has given their agreement to the policy. (Joining the server
+notices room is exempted from this).
+
+To enable this, add `block_events_error` under `user_consent`. For example:
+
+```yaml
+user_consent:
+  block_events_error: >-
+    You can't send any messages until you consent to the privacy policy at
+    %(consent_uri)s.
+```
+
+Synapse automatically replaces the placeholder `%(consent_uri)s` with the
+consent uri for that user.
+
+Also ensure that `public_baseurl` is set in `homeserver.yaml`, and gives the base
+URI that clients use to connect to the server. (It is used to construct
+`consent_uri` in the error.)
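+
+Putting the snippets above together, a complete `user_consent` section
+combining the options described in this document might look like the following
+sketch (the values are illustrative):
+
+```yaml
+user_consent:
+  template_dir: privacy_policy_templates
+  version: 1.0
+  require_at_registration: true
+  policy_name: "Privacy Policy"
+  server_notice_content:
+    msgtype: m.text
+    body: >-
+      Please give your consent to the privacy policy at %(consent_uri)s.
+  block_events_error: >-
+    You can't send any messages until you consent to the privacy policy at
+    %(consent_uri)s.
+```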
diff --git a/docs/usage/configuration/json_web_tokens.md b/docs/usage/configuration/json_web_tokens.md
new file mode 100644
index 0000000000..5be9fd26e3
--- /dev/null
+++ b/docs/usage/configuration/json_web_tokens.md
@@ -0,0 +1,97 @@
+# JWT Login Type
+
+Synapse comes with a non-standard login type to support
+[JSON Web Tokens](https://en.wikipedia.org/wiki/JSON_Web_Token). In general the
+documentation for
+[the login endpoint](https://matrix.org/docs/spec/client_server/r0.6.1#login)
+is still valid (and the mechanism works similarly to the
+[token based login](https://matrix.org/docs/spec/client_server/r0.6.1#token-based)).
+
+To log in using a JSON Web Token, clients should submit a `/login` request as
+follows:
+
+```json
+{
+  "type": "org.matrix.login.jwt",
+  "token": "<jwt>"
+}
+```
+
+Note that the login type of `m.login.jwt` is supported, but is deprecated. This
+will be removed in a future version of Synapse.
+
+The `token` field should include the JSON web token with the following claims:
+
+* The `sub` (subject) claim is required and should encode the local part of the
+  user ID.
+* The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
+  claims are optional, but validated if present.
+* The issuer (`iss`) claim is optional, but if an issuer is configured in Synapse
+  it becomes required and is validated.
+* The audience (`aud`) claim is optional, but if an audience is configured it
+  becomes required and is validated. Providing the audience claim when no audience
+  is configured will cause validation to fail.
+
+In the case that the token is not valid, the homeserver must respond with
+`403 Forbidden` and an error code of `M_FORBIDDEN`.
+
+As with other login types, there are additional fields (e.g. `device_id` and
+`initial_device_display_name`) which can be included in the above request.
+
+## Preparing Synapse
+
+The JSON Web Token integration in Synapse uses the
+[`PyJWT`](https://pypi.org/project/pyjwt/) library, which must be installed
+as follows:
+
+ * The relevant libraries are included in the Docker images and Debian packages
+   provided by `matrix.org` so no further action is needed.
+
+ * If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
+   install matrix-synapse[jwt]` to install the necessary dependencies.
+
+ * For other installation mechanisms, see the documentation provided by the
+   maintainer.
+
+To enable the JSON web token integration, you should then add a `jwt_config` section
+to your configuration file (or uncomment the `enabled: true` line in the
+existing section). See [sample_config.yaml](./sample_config.yaml) for some
+sample settings.
+
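+As a sketch (the values are placeholders; check the sample config for the
+options supported by your Synapse version), a configuration using a pre-shared
+secret which additionally pins the expected issuer and audience might look
+like this:
+
+```yaml
+jwt_config:
+    enabled: true
+    secret: "provided-by-your-issuer"
+    algorithm: "HS256"
+    # Optional: if set, tokens must carry matching "iss"/"aud" claims.
+    issuer: "https://issuer.example.com"
+    audiences: ["synapse"]
+```
+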
+## How to test JWT as a developer
+
+Although JSON Web Tokens are typically generated from an external server, the
+examples below use [PyJWT](https://pyjwt.readthedocs.io/en/latest/) directly.
+
+1.  Configure Synapse with JWT logins. Note that this example uses a pre-shared
+    secret and an algorithm of HS256:
+
+    ```yaml
+    jwt_config:
+        enabled: true
+        secret: "my-secret-token"
+        algorithm: "HS256"
+    ```
+2.  Generate a JSON web token:
+
+    ```bash
+    $ pyjwt --key=my-secret-token --alg=HS256 encode sub=test-user
+    eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc
+    ```
+3.  Query for the login types and ensure `org.matrix.login.jwt` is there:
+
+    ```bash
+    curl http://localhost:8080/_matrix/client/r0/login
+    ```
+4.  Log in using the generated JSON web token from above:
+
+    ```bash
+    $ curl http://localhost:8080/_matrix/client/r0/login -X POST \
+        --data '{"type":"org.matrix.login.jwt","token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc"}'
+    {
+        "access_token": "<access token>",
+        "device_id": "ACBDEFGHI",
+        "home_server": "localhost:8080",
+        "user_id": "@test-user:localhost:8480"
+    }
+    ```
+
+You should now be able to use the returned access token to query the client API.
diff --git a/docs/usage/configuration/message_retention_policies.md b/docs/usage/configuration/message_retention_policies.md
new file mode 100644
index 0000000000..9214d6d7e9
--- /dev/null
+++ b/docs/usage/configuration/message_retention_policies.md
@@ -0,0 +1,205 @@
+# Message retention policies
+
+Synapse admins can enable support for message retention policies on
+their homeserver. Message retention policies exist at a room level,
+follow the semantics described in
+[MSC1763](https://github.com/matrix-org/matrix-doc/blob/matthew/msc1763/proposals/1763-configurable-retention-periods.md),
+and allow server and room admins to configure how long messages should
+be kept in a homeserver's database before being purged from it.
+**Please note that, as this feature isn't part of the Matrix
+specification yet, this implementation is to be considered as
+experimental.** 
+
+A message retention policy is mainly defined by its `max_lifetime`
+parameter, which defines how long a message can be kept around after
+it was sent to the room. If a room doesn't have a message retention
+policy, and there's no default one for a given server, then no message
+sent in that room is ever purged on that server.
+
+MSC1763 also specifies semantics for a `min_lifetime` parameter which
+defines the amount of time after which an event _can_ get purged (after
+it was sent to the room), but Synapse doesn't currently support it
+beyond registering it.
+
+Both `max_lifetime` and `min_lifetime` are optional parameters.
+
+Note that message retention policies don't apply to state events.
+
+Once an event reaches its expiry date (defined as the time it was sent
+plus the value for `max_lifetime` in the room), two things happen:
+
+* Synapse stops serving the event to clients via any endpoint.
+* The message gets picked up by the next purge job (see the "Purge jobs"
+  section) and is removed from Synapse's database.
+
+Since purge jobs don't run continuously, this means that an event might
+stay in a server's database for longer than the value for `max_lifetime`
+in the room would allow, though hidden from clients.
+
+Similarly, if a server (with support for message retention policies
+enabled) receives from another server an event that should have been
+purged according to its room's policy, then the receiving server will
+process and store that event until it's picked up by the next purge job,
+though it will always hide it from clients.
+
+Synapse requires at least one message in each room, so it will never
+delete the last message in a room. It will, however, hide it from
+clients.
+
+
+## Server configuration
+
+Support for this feature can be enabled and configured in the
+`retention` section of the Synapse configuration file (see the
+[sample file](https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518)).
+
+To enable support for message retention policies, set the setting
+`enabled` in this section to `true`.
+
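+For example, the minimal configuration to turn the feature on is:
+
+```yaml
+retention:
+  enabled: true
+```
+
+The settings described in the rest of this document (`default_policy`,
+`purge_jobs`, `allowed_lifetime_min` and `allowed_lifetime_max`) all live
+under this same `retention` section.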
+
+### Default policy
+
+A default message retention policy is a policy defined in Synapse's
+configuration that is used by Synapse for every room that doesn't have a
+message retention policy configured in its state. This allows server
+admins to ensure that messages are never kept indefinitely in a server's
+database. 
+
+A default policy can be defined as such, in the `retention` section of
+the configuration file:
+
+```yaml
+default_policy:
+  min_lifetime: 1d
+  max_lifetime: 1y
+```
+
+Here, `min_lifetime` and `max_lifetime` have the same meaning and level
+of support as previously described. They can be expressed either as a
+duration (using the units `s` (seconds), `m` (minutes), `h` (hours),
+`d` (days), `w` (weeks) and `y` (years)) or as a number of milliseconds.
+
+
+### Purge jobs
+
+Purge jobs are the jobs that Synapse runs in the background to purge
+expired events from the database. They are only run if support for
+message retention policies is enabled in the server's configuration. If
+no configuration for purge jobs is configured by the server admin,
+Synapse will use a default configuration, which is described in the
+[sample configuration file](https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518).
+
+Some server admins might want finer control over when events are removed
+depending on an event's room's policy. This can be done by setting the
+`purge_jobs` sub-section in the `retention` section of the configuration
+file. An example of such configuration could be:
+
+```yaml
+purge_jobs:
+  - longest_max_lifetime: 3d
+    interval: 12h
+  - shortest_max_lifetime: 3d
+    longest_max_lifetime: 1w
+    interval: 1d
+  - shortest_max_lifetime: 1w
+    interval: 2d
+```
+
+In this example, we define three jobs:
+
+* one that runs twice a day (every 12 hours) and purges events in rooms
+  whose policy's `max_lifetime` is lower than or equal to 3 days.
+* one that runs once a day and purges events in rooms whose policy's
+  `max_lifetime` is between 3 days and a week.
+* one that runs once every 2 days and purges events in rooms whose
+  policy's `max_lifetime` is greater than a week.
+
+Note that this example is tailored to show different configurations and
+features slightly more jobs than is probably necessary (in practice, a
+server admin would probably consider it better to replace the two last
+jobs with one that runs once a day and handles rooms whose
+policy's `max_lifetime` is greater than 3 days).
+
+Keep in mind, when configuring these jobs, that a purge job can become
+quite heavy on the server if it targets many rooms, therefore prefer
+having jobs with a low interval that target a limited set of rooms. Also
+make sure to include a job with no minimum and one with no maximum to
+make sure your configuration handles every policy.
+
+As previously mentioned in this documentation, while a purge job that
+runs e.g. every day means that an expired event might stay in the
+database for up to a day after its expiry, Synapse hides expired events
+from clients as soon as they expire, so the event is not visible to
+local users between its expiry date and the moment it gets purged from
+the server's database.
+
+
+### Lifetime limits
+
+Server admins can set limits on the values of `max_lifetime` to use when
+purging old events in a room. These limits can be defined as such in the
+`retention` section of the configuration file:
+
+```yaml
+allowed_lifetime_min: 1d
+allowed_lifetime_max: 1y
+```
+
+The limits are considered when running purge jobs. If necessary, the
+effective value of `max_lifetime` will be brought between
+`allowed_lifetime_min` and `allowed_lifetime_max` (inclusive).
+This means that, if the value of `max_lifetime` defined in the room's state
+is lower than `allowed_lifetime_min`, the value of `allowed_lifetime_min`
+will be used instead. Likewise, if the value of `max_lifetime` is higher
+than `allowed_lifetime_max`, the value of `allowed_lifetime_max` will be
+used instead.
+
+In the example above, we ensure Synapse never deletes events that are less
+than one day old, and that it always deletes events that are over a year
+old.
+
+If a default policy is set, and its `max_lifetime` value is lower than
+`allowed_lifetime_min` or higher than `allowed_lifetime_max`, the same
+process applies.
+
+Both parameters are optional; if one is omitted Synapse won't use it to
+adjust the effective value of `max_lifetime`.
+
+Like other settings in this section, these parameters can be expressed
+either as a duration or as a number of milliseconds.
+
+
+## Room configuration
+
+To configure a room's message retention policy, a room's admin or
+moderator needs to send a state event in that room with the type
+`m.room.retention` and the following content:
+
+```json
+{
+    "max_lifetime": ...
+}
+```
+
+In this event's content, the `max_lifetime` parameter has the same
+meaning as previously described, and needs to be expressed in
+milliseconds. The event's content can also include a `min_lifetime`
+parameter, which has the same meaning and limited support as previously
+described.
+
+Note that, of all the servers in the room, only the ones with support for
+message retention policies enabled will actually remove expired events. This
+support is currently not enabled by default in Synapse.
+
+
+## Note on reclaiming disk space
+
+While purge jobs actually delete data from the database, the disk space
+used by the database might not decrease immediately on the database's
+host. However, even though the database engine won't free up the disk
+space, it will start writing new data into where the purged data was.
+
+If you want to reclaim the freed disk space anyway and return it to the
+operating system, the server admin needs to run `VACUUM FULL;` (or
+`VACUUM;` for SQLite databases) on Synapse's database (see the related
+[PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-vacuum.html)).
diff --git a/docs/usage/configuration/registration_captcha.md b/docs/usage/configuration/registration_captcha.md
new file mode 100644
index 0000000000..49419ce8df
--- /dev/null
+++ b/docs/usage/configuration/registration_captcha.md
@@ -0,0 +1,37 @@
+# Overview
+A captcha can be enabled on your homeserver to help prevent bots from registering
+accounts. Synapse currently uses Google's reCAPTCHA service which requires API keys
+from Google.
+
+## Getting API keys
+
+1. Create a new site at <https://www.google.com/recaptcha/admin/create>
+1. Set the label to anything you want
+1. Set the type to reCAPTCHA v2 using the "I'm not a robot" Checkbox option.
+This is the only type of captcha that works with Synapse.
+1. Add the public hostname for your server, as set in `public_baseurl`
+in `homeserver.yaml`, to the list of authorized domains. If you have not set
+`public_baseurl`, use `server_name`.
+1. Agree to the terms of service and submit.
+1. Copy your site key and secret key and add them to your `homeserver.yaml`
+configuration file
+    ```yaml
+    recaptcha_public_key: YOUR_SITE_KEY
+    recaptcha_private_key: YOUR_SECRET_KEY
+    ```
+1. Enable the CAPTCHA for new registrations
+    ```yaml
+    enable_registration_captcha: true
+    ```
+1. Go to the settings page for the CAPTCHA you just created
+1. Uncheck the "Verify the origin of reCAPTCHA solutions" checkbox so that the
+captcha can be displayed in any client. If you do not disable this option then you
+must specify the domains of every client that is allowed to display the CAPTCHA.
+
+## Configuring IP used for auth
+
+The reCAPTCHA API requires that the IP address of the user who solved the
+CAPTCHA is sent. If the client is connecting through a proxy or load balancer,
+it may be required to use the `X-Forwarded-For` (XFF) header instead of the origin
+IP address. This can be configured using the `x_forwarded` directive in the
+listeners section of the `homeserver.yaml` configuration file.
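+
+For example, a listener sitting behind a reverse proxy might look like the
+following sketch (the port and resource names are illustrative and should
+match your existing configuration):
+
+```yaml
+listeners:
+  - port: 8008
+    type: http
+    tls: false
+    x_forwarded: true
+    resources:
+      - names: [client, federation]
+```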
diff --git a/docs/usage/configuration/server_notices.md b/docs/usage/configuration/server_notices.md
new file mode 100644
index 0000000000..339d10a0ab
--- /dev/null
+++ b/docs/usage/configuration/server_notices.md
@@ -0,0 +1,61 @@
+# Server Notices
+
+'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
+channel whereby server administrators can send messages to users on the server.
+
+They are used as part of communication of the server policies (see
+[Consent Tracking](consent_tracking.md)), however the intention is that
+they may also find a use for features such as "Message of the day".
+
+This is a feature specific to Synapse, but it uses standard Matrix
+communication mechanisms, so should work with any Matrix client.
+
+## User experience
+
+When the user is first sent a server notice, they will get an invitation to a
+room (typically called 'Server Notices', though this is configurable in
+`homeserver.yaml`). They will be **unable to reject** this invitation -
+attempts to do so will receive an error.
+
+Once they accept the invitation, they will see the notice message in the room
+history; it will appear to have come from the 'server notices user' (see
+below).
+
+The user is prevented from sending any messages in this room by the power
+levels.
+
+Having joined the room, the user can leave the room if they want. Subsequent
+server notices will then cause a new room to be created.
+
+## Synapse configuration
+
+Server notices come from a specific user id on the server. Server
+administrators are free to choose the user id - something like `server` is
+suggested, meaning the notices will come from
+`@server:<your_server_name>`. Once the Server Notices user is configured, that
+user id becomes a special, privileged user, so administrators should ensure
+that **it is not already allocated**.
+
+In order to support server notices, it is necessary to add some configuration
+to the `homeserver.yaml` file. In particular, you should add a `server_notices`
+section, which should look like this:
+
+```yaml
+server_notices:
+   system_mxid_localpart: server
+   system_mxid_display_name: "Server Notices"
+   system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
+   room_name: "Server Notices"
+```
+
+The only compulsory setting is `system_mxid_localpart`, which defines the user
+id of the Server Notices user, as above. `room_name` defines the name of the
+room which will be created.
+
+`system_mxid_display_name` and `system_mxid_avatar_url` can be used to set the
+displayname and avatar of the Server Notices user.
+
+## Sending notices
+
+To send server notices to users you can use the
+[admin API](admin_api/server_notices.md).
diff --git a/docs/usage/configuration/structured_logging.md b/docs/usage/configuration/structured_logging.md
new file mode 100644
index 0000000000..b1281667e0
--- /dev/null
+++ b/docs/usage/configuration/structured_logging.md
@@ -0,0 +1,161 @@
+# Structured Logging
+
+A structured logging system can be useful when your logs are destined for a
+machine to parse and process. By maintaining its machine-readable characteristics,
+it enables more efficient searching and aggregations when consumed by software
+such as the "ELK stack".
+
+Synapse's structured logging system is configured via the file that Synapse's
+`log_config` config option points to. The file should include a formatter which
+uses the `synapse.logging.TerseJsonFormatter` class included with Synapse and a
+handler which uses the above formatter.
+
+There is also a `synapse.logging.JsonFormatter` option which does not include
+a timestamp in the resulting JSON. This is useful if the log ingester adds its
+own timestamp.
+
+A structured logging configuration looks similar to the following:
+
+```yaml
+version: 1
+
+formatters:
+    structured:
+        class: synapse.logging.TerseJsonFormatter
+
+handlers:
+    file:
+        class: logging.handlers.TimedRotatingFileHandler
+        formatter: structured
+        filename: /path/to/my/logs/homeserver.log
+        when: midnight
+        backupCount: 3  # Does not include the current log file.
+        encoding: utf8
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [file]
+    synapse.storage.SQL:
+        level: WARNING
+```
+
+The above logging config will set Synapse as 'INFO' logging level by default,
+with the SQL layer at 'WARNING', and will log to a file, stored as JSON.
+
+It is also possible to configure Synapse to log to a remote endpoint by using the
+`synapse.logging.RemoteHandler` class included with Synapse. It takes the
+following arguments:
+
+- `host`: Hostname or IP address of the log aggregator.
+- `port`: Numerical port to contact on the host.
+- `maximum_buffer`: (Optional, defaults to 1000) The maximum buffer size to allow.
+
+A remote structured logging configuration looks similar to the following:
+
+```yaml
+version: 1
+
+formatters:
+    structured:
+        class: synapse.logging.TerseJsonFormatter
+
+handlers:
+    remote:
+        class: synapse.logging.RemoteHandler
+        formatter: structured
+        host: 10.1.2.3
+        port: 9999
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [remote]
+    synapse.storage.SQL:
+        level: WARNING
+```
+
+The above logging config will set Synapse as 'INFO' logging level by default,
+with the SQL layer at 'WARNING', and will log JSON formatted messages to a
+remote endpoint at 10.1.2.3:9999.
+
+## Upgrading from legacy structured logging configuration
+
+Versions of Synapse prior to v1.23.0 included a custom structured logging
+configuration which is deprecated. It used a `structured: true` flag and
+configured `drains` instead of `handlers` and `formatters`.
+
+Synapse currently automatically converts the old configuration to the new
+configuration, but this will be removed in a future version of Synapse. The
+following reference can be used to update your configuration. Based on the drain
+`type`, we can pick a new handler:
+
+1. For a type of `console`, `console_json`, or `console_json_terse`: a handler
+   with a class of `logging.StreamHandler` and a `stream` of `ext://sys.stdout`
+   or `ext://sys.stderr` should be used.
+2. For a type of `file` or `file_json`: a handler of `logging.FileHandler` with
+   a location of the file path should be used.
+3. For a type of `network_json_terse`: a handler of `synapse.logging.RemoteHandler`
+   with the host and port should be used.
+
+Then based on the drain `type` we can pick a new formatter:
+
+1. For a type of `console` or `file` no formatter is necessary.
+2. For a type of `console_json` or `file_json`: a formatter of
+   `synapse.logging.JsonFormatter` should be used.
+3. For a type of `console_json_terse` or `network_json_terse`: a formatter of
+   `synapse.logging.TerseJsonFormatter` should be used.
+
+For each new handler and formatter they should be added to the logging configuration
+and then assigned to either a logger or the root logger.
+
+An example legacy configuration:
+
+```yaml
+structured: true
+
+loggers:
+    synapse:
+        level: INFO
+    synapse.storage.SQL:
+        level: WARNING
+
+drains:
+    console:
+        type: console
+        location: stdout
+    file:
+        type: file_json
+        location: homeserver.log
+```
+
+Would be converted into a new configuration:
+
+```yaml
+version: 1
+
+formatters:
+    json:
+        class: synapse.logging.JsonFormatter
+
+handlers:
+    console:
+        class: logging.StreamHandler
+        stream: ext://sys.stdout
+    file:
+        class: logging.FileHandler
+        formatter: json
+        filename: homeserver.log
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [console, file]
+    synapse.storage.SQL:
+        level: WARNING
+```
+
+The new logging configuration is a bit more verbose, but significantly more
+flexible. It allows for configurations that were not previously possible, such as
+sending plain logs over the network, or using different handlers for different
+modules.
diff --git a/docs/usage/configuration/templates.md b/docs/usage/configuration/templates.md
new file mode 100644
index 0000000000..a240f58b54
--- /dev/null
+++ b/docs/usage/configuration/templates.md
@@ -0,0 +1,239 @@
+# Templates
+
+Synapse uses parametrised templates to generate the content of emails it sends and
+webpages it shows to users.
+
+By default, Synapse will use the templates listed [here](https://github.com/matrix-org/synapse/tree/master/synapse/res/templates).
+Server admins can configure an additional directory for Synapse to look for templates
+in, allowing them to specify custom templates:
+
+```yaml
+templates:
+  custom_templates_directory: /path/to/custom/templates/
+```
+
+If this setting is not set, or the files named below are not found within the directory,
+default templates from within the Synapse package will be used.
+
+Templates that are given variables when being rendered are rendered using [Jinja 2](https://jinja.palletsprojects.com/en/2.11.x/).
+Templates rendered by Jinja 2 can also access two functions on top of the functions
+already available as part of Jinja 2:
+
+```python
+format_ts(value: int, format: str) -> str
+```
+
+Formats a timestamp in milliseconds.
+
+Example: `reason.last_sent_ts|format_ts("%c")`
+
+```python
+mxc_to_http(value: str, width: int, height: int, resize_method: str = "crop") -> str
+```
+
+Turns a `mxc://` URL for media content into an HTTP(S) one using the homeserver's
+`public_baseurl` configuration setting as the URL's base.
+
+Example: `message.sender_avatar_url|mxc_to_http(32,32)`
+
+
+## Email templates
+
+Below are the templates Synapse will look for when generating the content of an email:
+
+* `notif_mail.html` and `notif_mail.txt`: The contents of email notifications of missed
+  events.
+  When rendering, this template is given the following variables:
+    * `user_display_name`: the display name for the user receiving the notification
+    * `unsubscribe_link`: the link users can click to unsubscribe from email notifications
+    * `summary_text`: a summary of the notification(s). The text used can be customised
+      by configuring the various settings in the `email.subjects` section of the
+      configuration file.
+    * `rooms`: a list of rooms containing events to include in the email. Each element is
+      an object with the following attributes:
+        * `title`: a human-readable name for the room
+        * `hash`: a hash of the ID of the room
+        * `invite`: a boolean, which is `True` if the room is an invite the user hasn't
+          accepted yet, `False` otherwise
+        * `notifs`: a list of events, or an empty list if `invite` is `True`. Each element
+          is an object with the following attributes:
+            * `link`: a `matrix.to` link to the event
+            * `ts`: the time in milliseconds at which the event was received
+            * `messages`: a list of messages containing one message before the event, the
+              message in the event, and one message after the event. Each element is an
+              object with the following attributes:
+                * `event_type`: the type of the event
+                * `is_historical`: a boolean, which is `False` if the message is the one
+                  that triggered the notification, `True` otherwise
+                * `id`: the ID of the event
+                * `ts`: the time in milliseconds at which the event was sent
+                * `sender_name`: the display name for the event's sender
+                * `sender_avatar_url`: the avatar URL (as a `mxc://` URL) for the event's
+                  sender
+                * `sender_hash`: a hash of the user ID of the sender
+        * `link`: a `matrix.to` link to the room
+    * `reason`: information on the event that triggered the email to be sent. It's an
+      object with the following attributes:
+        * `room_id`: the ID of the room the event was sent in
+        * `room_name`: a human-readable name for the room the event was sent in
+        * `now`: the current time in milliseconds
+        * `received_at`: the time in milliseconds at which the event was received
+        * `delay_before_mail_ms`: the amount of time in milliseconds Synapse always waits
+          before ever emailing about a notification (to give the user a chance to respond
+          to other push notifications or to notice the window)
+        * `last_sent_ts`: the time in milliseconds at which a notification was last sent
+          for an event in this room
+        * `throttle_ms`: the minimum amount of time in milliseconds between two
+          notifications being sent for this room
+* `password_reset.html` and `password_reset.txt`: The contents of password reset emails
+  sent by the homeserver.
+  When rendering, these templates are given a `link` variable which contains the link the
+  user must click in order to reset their password.
+* `registration.html` and `registration.txt`: The contents of address verification emails
+  sent during registration.
+  When rendering, these templates are given a `link` variable which contains the link the
+  user must click in order to validate their email address.
+* `add_threepid.html` and `add_threepid.txt`: The contents of address verification emails
+  sent when an address is added to a Matrix account.
+  When rendering, these templates are given a `link` variable which contains the link the
+  user must click in order to validate their email address.
+
+
+## HTML page templates for registration and password reset
+
+Below are the templates Synapse will look for when generating pages related to
+registration and password reset:
+
+* `password_reset_confirmation.html`: An HTML page that a user will see when they follow
+  the link in the password reset email. The user will be asked to confirm the action
+  before their password is reset.
+  When rendering, this template is given the following variables:
+    * `sid`: the session ID for the password reset
+    * `token`: the token for the password reset
+    * `client_secret`: the client secret for the password reset
+* `password_reset_success.html` and `password_reset_failure.html`: HTML pages for success
+  and failure that a user will see when they confirm the password reset flow using the
+  page above.
+  When rendering, `password_reset_success.html` is given no variable, and
+  `password_reset_failure.html` is given a `failure_reason`, which contains the reason
+  for the password reset failure. 
+* `registration_success.html` and `registration_failure.html`: HTML pages for success and
+  failure that a user will see when they follow the link in an address verification email
+  sent during registration.
+  When rendering, `registration_success.html` is given no variable, and
+  `registration_failure.html` is given a `failure_reason`, which contains the reason
+  for the registration failure.
+* `add_threepid_success.html` and `add_threepid_failure.html`: HTML pages for success and
+  failure that a user will see when they follow the link in an address verification email
+  sent when an address is added to a Matrix account.
+  When rendering, `add_threepid_success.html` is given no variable, and
+  `add_threepid_failure.html` is given a `failure_reason`, which contains the reason
+  for the failure.
+
+
+## HTML page templates for Single Sign-On (SSO)
+
+Below are the templates Synapse will look for when generating pages related to SSO:
+
+* `sso_login_idp_picker.html`: HTML page to prompt the user to choose an
+  Identity Provider during login.
+  This is only used if multiple SSO Identity Providers are configured.
+  When rendering, this template is given the following variables:
+    * `redirect_url`: the URL that the user will be redirected to after
+      login.
+    * `server_name`: the homeserver's name.
+    * `providers`: a list of available Identity Providers. Each element is
+      an object with the following attributes:
+        * `idp_id`: unique identifier for the IdP
+        * `idp_name`: user-facing name for the IdP
+        * `idp_icon`: if specified in the IdP config, an MXC URI for an icon
+             for the IdP
+        * `idp_brand`: if specified in the IdP config, a textual identifier
+             for the brand of the IdP
+  The rendered HTML page should contain a form which submits its results
+  back as a GET request, with the following query parameters:
+    * `redirectUrl`: the client redirect URI (ie, the `redirect_url` passed
+      to the template)
+    * `idp`: the `idp_id` of the chosen IdP.
+* `sso_auth_account_details.html`: HTML page to prompt new users to enter a
+  userid and confirm other details. This is only shown if the
+  SSO implementation (with any `user_mapping_provider`) does not return
+  a localpart.
+  When rendering, this template is given the following variables:
+    * `server_name`: the homeserver's name.
+    * `idp`: details of the SSO Identity Provider that the user logged in
+      with: an object with the following attributes:
+        * `idp_id`: unique identifier for the IdP
+        * `idp_name`: user-facing name for the IdP
+        * `idp_icon`: if specified in the IdP config, an MXC URI for an icon
+             for the IdP
+        * `idp_brand`: if specified in the IdP config, a textual identifier
+             for the brand of the IdP
+    * `user_attributes`: an object containing details about the user that
+      we received from the IdP. May have the following attributes:
+        * `display_name`: the user's display name
+        * `emails`: a list of email addresses
+  The template should render a form which submits the following fields:
+    * `username`: the localpart of the user's chosen user id
+* `sso_new_user_consent.html`: HTML page allowing the user to consent to the
+  server's terms and conditions. This is only shown for new users, and only if
+  `user_consent.require_at_registration` is set.
+  When rendering, this template is given the following variables:
+    * `server_name`: the homeserver's name.
+    * `user_id`: the user's proposed Matrix ID.
+    * `user_profile.display_name`: the user's proposed display name, if any.
+    * `consent_version`: the version of the terms that the user will be
+      shown
+    * `terms_url`: a link to the page showing the terms.
+  The template should render a form which submits the following fields:
+    * `accepted_version`: the version of the terms accepted by the user
+      (ie, 'consent_version' from the input variables).
+* `sso_redirect_confirm.html`: HTML page for a confirmation step before redirecting back
+  to the client with the login token.
+  When rendering, this template is given the following variables:
+    * `redirect_url`: the URL the user is about to be redirected to.
+    * `display_url`: the same as `redirect_url`, but with the query
+                   parameters stripped. The intention is to have a
+                   human-readable URL to show to users, not to use it as
+                   the final address to redirect to.
+    * `server_name`: the homeserver's name.
+    * `new_user`: a boolean indicating whether this is the user's first time
+         logging in.
+    * `user_id`: the user's matrix ID.
+    * `user_profile.avatar_url`: an MXC URI for the user's avatar, if any.
+          `None` if the user has not set an avatar.
+    * `user_profile.display_name`: the user's display name. `None` if the user
+          has not set a display name.
+* `sso_auth_confirm.html`: HTML page which notifies the user that they are authenticating
+  to confirm an operation on their account during the user interactive authentication
+  process.
+  When rendering, this template is given the following variables:
+    * `redirect_url`: the URL the user is about to be redirected to.
+    * `description`: the operation which the user is being asked to confirm
+    * `idp`: details of the Identity Provider that we will use to confirm
+      the user's identity: an object with the following attributes:
+        * `idp_id`: unique identifier for the IdP
+        * `idp_name`: user-facing name for the IdP
+        * `idp_icon`: if specified in the IdP config, an MXC URI for an icon
+             for the IdP
+        * `idp_brand`: if specified in the IdP config, a textual identifier
+             for the brand of the IdP
+* `sso_auth_success.html`: HTML page shown after a successful user interactive
+  authentication session.
+  Note that this page must include the JavaScript which notifies of a successful
+  authentication (see https://matrix.org/docs/spec/client_server/r0.6.0#fallback).
+  This template has no additional variables.
+* `sso_auth_bad_user.html`: HTML page shown after a user-interactive authentication
+  session which does not map correctly onto the expected user.
+  When rendering, this template is given the following variables:
+    * `server_name`: the homeserver's name.
+    * `user_id_to_verify`: the MXID of the user that we are trying to
+      validate.
+* `sso_account_deactivated.html`: HTML page shown during single sign-on if a deactivated
+  user (according to Synapse's database) attempts to log in.
+  This template has no additional variables.
+* `sso_error.html`: HTML page to display to users if something goes wrong during the
+  OpenID Connect authentication process.
+  When rendering, this template is given two variables:
+    * `error`: the technical name of the error
+    * `error_description`: a human-readable message for the error
diff --git a/docs/usage/configuration/user_authentication/password_auth_providers.md b/docs/usage/configuration/user_authentication/password_auth_providers.md
new file mode 100644
index 0000000000..dc0dfffa21
--- /dev/null
+++ b/docs/usage/configuration/user_authentication/password_auth_providers.md
@@ -0,0 +1,129 @@
+<h2 style="color:red">
+This page of the Synapse documentation is now deprecated. For up to date
+documentation on setting up or writing a password auth provider module, please see
+<a href="modules/index.md">this page</a>.
+</h2>
+
+# Password auth provider modules
+
+Password auth providers offer a way for server administrators to
+integrate their Synapse installation with an existing authentication
+system.
+
+A password auth provider is a Python class which is dynamically loaded
+into Synapse, and provides a number of methods by which it can integrate
+with the authentication system.
+
+This document serves as a reference for those looking to implement their
+own password auth providers. Additionally, here is a list of known
+password auth provider module implementations:
+
+* [matrix-synapse-ldap3](https://github.com/matrix-org/matrix-synapse-ldap3/)
+* [matrix-synapse-shared-secret-auth](https://github.com/devture/matrix-synapse-shared-secret-auth)
+* [matrix-synapse-rest-password-provider](https://github.com/ma1uta/matrix-synapse-rest-password-provider)
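+
+Modules such as these are typically enabled via the `password_providers`
+setting in `homeserver.yaml`. For example (a sketch based on the LDAP
+provider's own documentation; the module path and options are illustrative):
+
+```yaml
+password_providers:
+  - module: "ldap_auth_provider.LdapAuthProvider"
+    config:
+      enabled: true
+      uri: "ldap://ldap.example.com:389"
+      base: "ou=users,dc=example,dc=com"
+      attributes:
+        uid: "cn"
+        mail: "mail"
+        name: "givenName"
+```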
+
+## Required methods
+
+Password auth provider classes must provide the following methods:
+
+* `parse_config(config)`
+
+  This method is passed the `config` object for this module from the
+  homeserver configuration file.
+
+  It should perform any appropriate sanity checks on the provided
+  configuration, and return an object which is then passed into
+  `__init__`.
+
+  This method should have the `@staticmethod` decorator.
+
+* `__init__(self, config, account_handler)`
+
+  The constructor is passed the config object returned by
+  `parse_config`, and a `synapse.module_api.ModuleApi` object which
+  allows the password provider to check if accounts exist and/or create
+  new ones.
+
+## Optional methods
+
+Password auth provider classes may optionally provide the following methods:
+
+* `get_db_schema_files(self)`
+
+  This method, if implemented, should return an Iterable of
+  `(name, stream)` pairs of database schema files. Each file is applied
+  in turn at initialisation, and a record is then made in the database
+  so that it is not re-applied on the next start.
+
+* `get_supported_login_types(self)`
+
+  This method, if implemented, should return a `dict` mapping from a
+  login type identifier (such as `m.login.password`) to an iterable
+  giving the fields which must be provided by the user in the submission
+  to [the `/login` API](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
+  These fields are passed in the `login_dict` dictionary to `check_auth`.
+
+  For example, if a password auth provider wants to implement a custom
+  login type of `com.example.custom_login`, where the client is expected
+  to pass the fields `secret1` and `secret2`, the provider should
+  implement this method and return the following dict:
+
+  ```python
+  {"com.example.custom_login": ("secret1", "secret2")}
+  ```
+
+* `check_auth(self, username, login_type, login_dict)`
+
+  This method does the real work. If implemented, it
+  will be called for each login attempt where the login type matches one
+  of the keys returned by `get_supported_login_types`.
+
+  It is passed the (possibly unqualified) `user` field provided by the client,
+  the login type, and a dictionary of login secrets passed by the
+  client.
+
+  The method should return an `Awaitable` object, which resolves
+  to the canonical `@localpart:domain` user ID if authentication is
+  successful, and `None` if not.
+
+  Alternatively, the `Awaitable` can resolve to a `(str, func)` tuple, in
+  which case the second field is a callback which will be called with
+  the result from the `/login` call (including `access_token`,
+  `device_id`, etc.)
+
+* `check_3pid_auth(self, medium, address, password)`
+
+  This method, if implemented, is called when a user attempts to
+  register or log in with a third party identifier, such as email. It is
+  passed the medium (ex. "email"), an address (ex.
+  "<jdoe@example.com>") and the user's password.
+
+  The method should return an `Awaitable` object, which resolves
+  to a `str` containing the user's (canonical) User id if
+  authentication was successful, and `None` if not.
+
+  As with `check_auth`, the `Awaitable` may alternatively resolve to a
+  `(user_id, callback)` tuple.
+
+* `check_password(self, user_id, password)`
+
+  This method provides a simpler interface than
+  `get_supported_login_types` and `check_auth` for password auth
+  providers that just want to provide a mechanism for validating
+  `m.login.password` logins.
+
+  If implemented, it will be called to check logins with an
+  `m.login.password` login type. It is passed a qualified
+  `@localpart:domain` user id, and the password provided by the user.
+
+  The method should return an `Awaitable` object, which resolves
+  to `True` if authentication is successful, and `False` if not.
+
+* `on_logged_out(self, user_id, device_id, access_token)`
+
+  This method, if implemented, is called when a user logs out. It is
+  passed the qualified user ID, the ID of the deactivated device (if
+  any: access tokens are occasionally created without an associated
+  device ID), and the (now deactivated) access token.
+
+  It may return an `Awaitable` object; the logout request will
+  wait for the `Awaitable` to complete, but the result is ignored.
diff --git a/docs/usage/configuration/user_authentication/single_sign_on/openid.md b/docs/usage/configuration/user_authentication/single_sign_on/openid.md
new file mode 100644
index 0000000000..c74e8bda60
--- /dev/null
+++ b/docs/usage/configuration/user_authentication/single_sign_on/openid.md
@@ -0,0 +1,572 @@
+# Configuring Synapse to authenticate against an OpenID Connect provider
+
+Synapse can be configured to use an OpenID Connect Provider (OP) for
+authentication, instead of its own local password database.
+
+Any OP should work with Synapse, as long as it supports the authorization code
+flow. There are a few options for that:
+
+ - start a local OP. Synapse has been tested with [Hydra][hydra] and
+   [Dex][dex-idp].  Note that for an OP to work, it should be served under a
+   secure (HTTPS) origin.  A certificate signed with a self-signed, locally
+   trusted CA should work. In that case, start Synapse with a `SSL_CERT_FILE`
+   environment variable set to the path of the CA.
+
+ - set up a SaaS OP, like [Google][google-idp], [Auth0][auth0] or
+   [Okta][okta]. Synapse has been tested with Auth0 and Google.
+
+It may also be possible to use other OAuth2 providers which provide the
+[authorization code grant type](https://tools.ietf.org/html/rfc6749#section-4.1),
+such as [GitHub][github-idp].
+
+[google-idp]: https://developers.google.com/identity/protocols/oauth2/openid-connect
+[auth0]: https://auth0.com/
+[authentik]: https://goauthentik.io/
+[lemonldap]: https://lemonldap-ng.org/
+[okta]: https://www.okta.com/
+[dex-idp]: https://github.com/dexidp/dex
+[keycloak-idp]: https://www.keycloak.org/docs/latest/server_admin/#sso-protocols
+[hydra]: https://www.ory.sh/docs/hydra/
+[github-idp]: https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps
+
+## Preparing Synapse
+
+The OpenID integration in Synapse uses the
+[`authlib`](https://pypi.org/project/Authlib/) library, which must be installed
+as follows:
+
+ * The relevant libraries are included in the Docker images and Debian packages
+   provided by `matrix.org` so no further action is needed.
+
+ * If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
+   install matrix-synapse[oidc]` to install the necessary dependencies.
+
+ * For other installation mechanisms, see the documentation provided by the
+   maintainer.
+
+To enable the OpenID integration, you should then add a section to the `oidc_providers`
+setting in your configuration file (or uncomment one of the existing examples).
+See [sample_config.yaml](./sample_config.yaml) for some sample settings, as well as
+the text below for example configurations for specific providers.
+
+## Sample configs
+
+Here are a few configs for providers that should work with Synapse.
+
+### Microsoft Azure Active Directory
+Azure AD can act as an OpenID Connect Provider. Register a new application under
+*App registrations* in the Azure AD management console. The RedirectURI for your
+application should point to your matrix server:
+`[synapse public baseurl]/_synapse/client/oidc/callback`
+
+Go to *Certificates & secrets* and register a new client secret. Make note of your
+Directory (tenant) ID as it will be used in the Azure links.
+Edit your Synapse config file and add a provider to the `oidc_providers` section:
+
+```yaml
+oidc_providers:
+  - idp_id: microsoft
+    idp_name: Microsoft
+    issuer: "https://login.microsoftonline.com/<tenant id>/v2.0"
+    client_id: "<client id>"
+    client_secret: "<client secret>"
+    scopes: ["openid", "profile"]
+    authorization_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize"
+    token_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token"
+    userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo"
+
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username.split('@')[0] }}"
+        display_name_template: "{{ user.name }}"
+```
+
+### Dex
+
+[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
+Although it is designed to help build a full-blown provider with an
+external database, it can be configured with static passwords in a config file.
+
+Follow the [Getting Started guide](https://dexidp.io/docs/getting-started/)
+to install Dex.
+
+Edit the `examples/config-dev.yaml` config file from the Dex repo to add a client:
+
+```yaml
+staticClients:
+- id: synapse
+  secret: secret
+  redirectURIs:
+  - '[synapse public baseurl]/_synapse/client/oidc/callback'
+  name: 'Synapse'
+```
+
+Run with `dex serve examples/config-dev.yaml`.
+
+Synapse config:
+
+```yaml
+oidc_providers:
+  - idp_id: dex
+    idp_name: "My Dex server"
+    skip_verification: true # This is needed as Dex is served on an insecure endpoint
+    issuer: "http://127.0.0.1:5556/dex"
+    client_id: "synapse"
+    client_secret: "secret"
+    scopes: ["openid", "profile"]
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.name }}"
+        display_name_template: "{{ user.name|capitalize }}"
+```
+
+### Keycloak
+
+[Keycloak][keycloak-idp] is an open-source IdP maintained by Red Hat.
+
+Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to install Keycloak and set up a realm.
+
+1. Click `Clients` in the sidebar and click `Create`
+
+2. Fill in the fields as below:
+
+| Field | Value |
+|-----------|-----------|
+| Client ID | `synapse` |
+| Client Protocol | `openid-connect` |
+
+3. Click `Save`
+4. Fill in the fields as below:
+
+| Field | Value |
+|-----------|-----------|
+| Client ID | `synapse` |
+| Enabled | `On` |
+| Client Protocol | `openid-connect` |
+| Access Type | `confidential` |
+| Valid Redirect URIs | `[synapse public baseurl]/_synapse/client/oidc/callback` |
+
+5. Click `Save`
+6. On the Credentials tab, update the fields:
+
+| Field | Value |
+|-------|-------|
+| Client Authenticator | `Client ID and Secret` |
+
+7. Click `Regenerate Secret`
+8. Copy the secret and use it in the Synapse config below:
+
+```yaml
+oidc_providers:
+  - idp_id: keycloak
+    idp_name: "My KeyCloak server"
+    issuer: "https://127.0.0.1:8443/auth/realms/{realm_name}"
+    client_id: "synapse"
+    client_secret: "copy secret generated from above"
+    scopes: ["openid", "profile"]
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+```
+
+### Auth0
+
+[Auth0][auth0] is a hosted SaaS IdP solution.
+
+1. Create a regular web application for Synapse
+2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/client/oidc/callback`
+3. Add a rule to add the `preferred_username` claim.
+   <details>
+    <summary>Code sample</summary>
+
+    ```js
+    function addPersistenceAttribute(user, context, callback) {
+      user.user_metadata = user.user_metadata || {};
+      user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
+      context.idToken.preferred_username = user.user_metadata.preferred_username;
+
+      auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
+        .then(function(){
+            callback(null, user, context);
+        })
+        .catch(function(err){
+            callback(err);
+        });
+    }
+    ```
+  </details>
+
+Synapse config:
+
+```yaml
+oidc_providers:
+  - idp_id: auth0
+    idp_name: Auth0
+    issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    scopes: ["openid", "profile"]
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+```
+
+### Authentik
+
+[Authentik][authentik] is an open-source IdP solution.
+
+1. Create a provider in Authentik, with type OAuth2/OpenID.
+2. The parameters are:
+- Client Type: Confidential
+- JWT Algorithm: RS256
+- Scopes: OpenID, Email and Profile
+- RSA Key: Select any available key
+- Redirect URIs: `[synapse public baseurl]/_synapse/client/oidc/callback`
+3. Create an application for synapse in Authentik and link it to the provider.
+4. Note the slug of your application, Client ID and Client Secret.
+
+Synapse config:
+```yaml
+oidc_providers:
+  - idp_id: authentik
+    idp_name: authentik
+    discover: true
+    issuer: "https://your.authentik.example.org/application/o/your-app-slug/" # TO BE FILLED: domain and slug
+    client_id: "your client id" # TO BE FILLED
+    client_secret: "your client secret" # TO BE FILLED
+    scopes:
+      - "openid"
+      - "profile"
+      - "email"
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.preferred_username|capitalize }}" # TO BE FILLED: If your users have names in Authentik and you want those in Synapse, this should be replaced with user.name|capitalize.
+```
+
+### LemonLDAP
+
+[LemonLDAP::NG][lemonldap] is an open-source IdP solution.
+
+1. Create an OpenID Connect Relying Party in LemonLDAP::NG
+2. The parameters are:
+- Client ID under the basic menu of the new Relying Parties (`Options > Basic >
+  Client ID`)
+- Client secret (`Options > Basic > Client secret`)
+- JWT Algorithm: RS256 within the security menu of the new Relying Parties
+  (`Options > Security > ID Token signature algorithm` and `Options > Security >
+  Access Token signature algorithm`)
+- Scopes: OpenID, Email and Profile
+- Allowed redirection addresses for login (`Options > Basic > Allowed
+  redirection addresses for login`):
+  `[synapse public baseurl]/_synapse/client/oidc/callback`
+
+Synapse config:
+```yaml
+oidc_providers:
+  - idp_id: lemonldap
+    idp_name: lemonldap
+    discover: true
+    issuer: "https://auth.example.org/" # TO BE FILLED: replace with your domain
+    client_id: "your client id" # TO BE FILLED
+    client_secret: "your client secret" # TO BE FILLED
+    scopes:
+      - "openid"
+      - "profile"
+      - "email"
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        # TO BE FILLED: If your users have names in LemonLDAP::NG and you want those in Synapse, this should be replaced with user.name|capitalize or any valid filter.
+        display_name_template: "{{ user.preferred_username|capitalize }}"
+```
+
+### GitHub
+
+[GitHub][github-idp] is a bit special as it is not an OpenID Connect compliant provider, but
+just a regular OAuth2 provider.
+
+The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
+can be used to retrieve information on the authenticated user. As the Synapse
+login mechanism needs an attribute to uniquely identify users, and that endpoint
+does not return a `sub` property, an alternative `subject_claim` has to be set.
+
+1. Create a new OAuth application: https://github.com/settings/applications/new.
+2. Set the callback URL to `[synapse public baseurl]/_synapse/client/oidc/callback`.
+
+Synapse config:
+
+```yaml
+oidc_providers:
+  - idp_id: github
+    idp_name: Github
+    idp_brand: "github"  # optional: styling hint for clients
+    discover: false
+    issuer: "https://github.com/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    authorization_endpoint: "https://github.com/login/oauth/authorize"
+    token_endpoint: "https://github.com/login/oauth/access_token"
+    userinfo_endpoint: "https://api.github.com/user"
+    scopes: ["read:user"]
+    user_mapping_provider:
+      config:
+        subject_claim: "id"
+        localpart_template: "{{ user.login }}"
+        display_name_template: "{{ user.name }}"
+```
+
+### Google
+
+[Google][google-idp] is an OpenID certified authentication and authorisation provider.
+
+1. Set up a project in the Google API Console (see
+   https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).
+2. Add an "OAuth Client ID" for a Web Application under "Credentials".
+3. Copy the Client ID and Client Secret, and add the following to your synapse config:
+   ```yaml
+   oidc_providers:
+     - idp_id: google
+       idp_name: Google
+       idp_brand: "google"  # optional: styling hint for clients
+       issuer: "https://accounts.google.com/"
+       client_id: "your-client-id" # TO BE FILLED
+       client_secret: "your-client-secret" # TO BE FILLED
+       scopes: ["openid", "profile"]
+       user_mapping_provider:
+         config:
+           localpart_template: "{{ user.given_name|lower }}"
+           display_name_template: "{{ user.name }}"
+   ```
+4. Back in the Google console, add this Authorized redirect URI: `[synapse
+   public baseurl]/_synapse/client/oidc/callback`.
+
+### Twitch
+
+1. Set up a developer account on [Twitch](https://dev.twitch.tv/)
+2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
+3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/client/oidc/callback`
+
+Synapse config:
+
+```yaml
+oidc_providers:
+  - idp_id: twitch
+    idp_name: Twitch
+    issuer: "https://id.twitch.tv/oauth2/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    client_auth_method: "client_secret_post"
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+```
+
+### GitLab
+
+1. Create a [new application](https://gitlab.com/profile/applications).
+2. Add the `read_user` and `openid` scopes.
+3. Add this Callback URL: `[synapse public baseurl]/_synapse/client/oidc/callback`
+
+Synapse config:
+
+```yaml
+oidc_providers:
+  - idp_id: gitlab
+    idp_name: Gitlab
+    idp_brand: "gitlab"  # optional: styling hint for clients
+    issuer: "https://gitlab.com/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    client_auth_method: "client_secret_post"
+    scopes: ["openid", "read_user"]
+    user_profile_method: "userinfo_endpoint"
+    user_mapping_provider:
+      config:
+        localpart_template: '{{ user.nickname }}'
+        display_name_template: '{{ user.name }}'
+```
+
+### Facebook
+
+Like Github, Facebook provides a custom OAuth2 API rather than an OIDC-compliant
+one, so it requires a little more configuration.
+
+0. You will need a Facebook developer account. You can register for one
+   [here](https://developers.facebook.com/async/registration/).
+1. On the [apps](https://developers.facebook.com/apps/) page of the developer
+   console, click "Create App", and choose "Build Connected Experiences".
+2. Once the app is created, add "Facebook Login" and choose "Web". You don't
+   need to go through the whole form here.
+3. In the left-hand menu, open "Products"/"Facebook Login"/"Settings".
+   * Add `[synapse public baseurl]/_synapse/client/oidc/callback` as an OAuth Redirect
+     URL.
+4. In the left-hand menu, open "Settings/Basic". Here you can copy the "App ID"
+   and "App Secret" for use below.
+
+Synapse config:
+
+```yaml
+  - idp_id: facebook
+    idp_name: Facebook
+    idp_brand: "facebook"  # optional: styling hint for clients
+    discover: false
+    issuer: "https://facebook.com"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    scopes: ["openid", "email"]
+    authorization_endpoint: https://facebook.com/dialog/oauth
+    token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
+    user_profile_method: "userinfo_endpoint"
+    userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
+    user_mapping_provider:
+      config:
+        subject_claim: "id"
+        display_name_template: "{{ user.name }}"
+```
+
+Relevant documents:
+ * https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow
+ * Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/
+ * Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user
+
+### Gitea
+
+Gitea is, like Github, not an OpenID provider, but just an OAuth2 provider.
+
+The [`/user` API endpoint](https://try.gitea.io/api/swagger#/user/userGetCurrent)
+can be used to retrieve information on the authenticated user. As the Synapse
+login mechanism needs an attribute to uniquely identify users, and that endpoint
+does not return a `sub` property, an alternative `subject_claim` has to be set.
+
+1. Create a new application.
+2. Add this Callback URL: `[synapse public baseurl]/_synapse/client/oidc/callback`
+
+Synapse config:
+
+```yaml
+oidc_providers:
+  - idp_id: gitea
+    idp_name: Gitea
+    discover: false
+    issuer: "https://your-gitea.com/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    client_auth_method: client_secret_post
+    scopes: [] # Gitea doesn't support Scopes
+    authorization_endpoint: "https://your-gitea.com/login/oauth/authorize"
+    token_endpoint: "https://your-gitea.com/login/oauth/access_token"
+    userinfo_endpoint: "https://your-gitea.com/api/v1/user"
+    user_mapping_provider:
+      config:
+        subject_claim: "id"
+        localpart_template: "{{ user.login }}"
+        display_name_template: "{{ user.full_name }}"
+```
+
+### XWiki
+
+Install the [OpenID Connect Provider](https://extensions.xwiki.org/xwiki/bin/view/Extension/OpenID%20Connect/OpenID%20Connect%20Provider/) extension in your [XWiki](https://www.xwiki.org) instance.
+
+Synapse config:
+
+```yaml
+oidc_providers:
+  - idp_id: xwiki
+    idp_name: "XWiki"
+    issuer: "https://myxwikihost/xwiki/oidc/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_auth_method: none
+    scopes: ["openid", "profile"]
+    user_profile_method: "userinfo_endpoint"
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+```
+
+### Apple
+
+Configuring "Sign in with Apple" (SiWA) requires an Apple Developer account.
+
+You will need to create a new "Services ID" for SiWA, and create and download a
+private key with "SiWA" enabled.
+
+As well as the private key file, you will need:
+ * Client ID: the "identifier" you gave the "Services ID"
+ * Team ID: a 10-character ID associated with your developer account.
+ * Key ID: the 10-character identifier for the key.
+
+https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more
+documentation on setting up SiWA.
+
+The synapse config will look like this:
+
+```yaml
+  - idp_id: apple
+    idp_name: Apple
+    issuer: "https://appleid.apple.com"
+    client_id: "your-client-id" # Set to the "identifier" for your "Services ID"
+    client_auth_method: "client_secret_post"
+    client_secret_jwt_key:
+      key_file: "/path/to/AuthKey_KEYIDCODE.p8"  # point to your key file
+      jwt_header:
+        alg: ES256
+        kid: "KEYIDCODE"   # Set to the 10-char Key ID
+      jwt_payload:
+        iss: TEAMIDCODE    # Set to the 10-char Team ID
+    scopes: ["name", "email", "openid"]
+    authorization_endpoint: https://appleid.apple.com/auth/authorize?response_mode=form_post
+    user_mapping_provider:
+      config:
+        email_template: "{{ user.email }}"
+```
+
+### Django OAuth Toolkit
+
+[django-oauth-toolkit](https://github.com/jazzband/django-oauth-toolkit) is a
+Django application providing out of the box all the endpoints, data and logic
+needed to add OAuth2 capabilities to your Django projects. It supports
+[OpenID Connect too](https://django-oauth-toolkit.readthedocs.io/en/latest/oidc.html).
+
+Configuration on Django's side:
+
+1. Add an application: https://example.com/admin/oauth2_provider/application/add/ and choose parameters like this:
+* `Redirect uris`: https://synapse.example.com/_synapse/client/oidc/callback
+* `Client type`: `Confidential`
+* `Authorization grant type`: `Authorization code`
+* `Algorithm`: `HMAC with SHA-2 256`
+2. You can [customize the claims](https://django-oauth-toolkit.readthedocs.io/en/latest/oidc.html#customizing-the-oidc-responses) Django gives to synapse (optional):
+   <details>
+    <summary>Code sample</summary>
+
+    ```python
+    from oauth2_provider.oauth2_validators import OAuth2Validator
+
+    class CustomOAuth2Validator(OAuth2Validator):
+
+        # Add custom claims to the ID token / userinfo response.
+        def get_additional_claims(self, request):
+            return {
+                "sub": request.user.email,
+                "email": request.user.email,
+                "first_name": request.user.first_name,
+                "last_name": request.user.last_name,
+            }
+    ```
+   </details>
+
+Your synapse config is then:
+
+```yaml
+oidc_providers:
+  - idp_id: django_example
+    idp_name: "Django Example"
+    issuer: "https://example.com/o/"
+    client_id: "your-client-id"  # CHANGE ME
+    client_secret: "your-client-secret"  # CHANGE ME
+    scopes: ["openid"]
+    user_profile_method: "userinfo_endpoint"  # needed because oauth-toolkit does not include user information in the authorization response
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.email.split('@')[0] }}"
+        display_name_template: "{{ user.first_name }} {{ user.last_name }}"
+        email_template: "{{ user.email }}"
+```
diff --git a/docs/usage/configuration/user_authentication/single_sign_on/sso_mapping_providers.md b/docs/usage/configuration/user_authentication/single_sign_on/sso_mapping_providers.md
new file mode 100644
index 0000000000..7a407012e0
--- /dev/null
+++ b/docs/usage/configuration/user_authentication/single_sign_on/sso_mapping_providers.md
@@ -0,0 +1,197 @@
+# SSO Mapping Providers
+
+A mapping provider is a Python class (loaded via a Python module) that
+works out how to map attributes of an SSO response to Matrix-specific
+user attributes. Details such as user ID localpart, displayname, and even avatar
+URLs are all things that can be mapped from attributes returned by an SSO service.
+
+As an example, an SSO service may return the email address
+"john.smith@example.com" for a user, whereas Synapse will need to figure out how
+to turn that into a displayname when creating a Matrix user for this individual.
+It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
+variations. As each Synapse configuration may want something different, this is
+where SSO mapping providers come into play.
+
+SSO mapping providers are currently supported for OpenID and SAML SSO
+configurations. Please see the details below for how to implement your own.
+
+It is up to the mapping provider whether the user should be assigned a predefined
+Matrix ID based on the SSO attributes, or if the user should be allowed to
+choose their own username.
+
+In the first case - where users are automatically allocated a Matrix ID - it is
+the responsibility of the mapping provider to normalise the SSO attributes and
+map them to a valid Matrix ID. The [specification for Matrix
+IDs](https://matrix.org/docs/spec/appendices#user-identifiers) has some
+information about what is considered valid.
+
+If the mapping provider does not assign a Matrix ID, then Synapse will
+automatically serve an HTML page allowing the user to pick their own username.
+
+External mapping providers are provided to Synapse in the form of an external
+Python module. You can retrieve this module from [PyPI](https://pypi.org) or elsewhere,
+but it must be importable via Synapse (e.g. it must be in the same virtualenv
+as Synapse). The Synapse config is then modified to point to the mapping provider
+(and optionally provide additional configuration for it).
+
+## OpenID Mapping Providers
+
+The OpenID mapping provider can be customized by editing the
+`oidc_providers.user_mapping_provider.module` config option.
+
+`oidc_providers.user_mapping_provider.config` allows you to provide custom
+configuration options to the module. Check with the module's documentation for
+what options it provides (if any). The options listed by default are for the
+user mapping provider built in to Synapse. If using a custom module, you should
+comment these options out and use those specified by the module instead.
+
+### Building a Custom OpenID Mapping Provider
+
+A custom mapping provider must specify the following methods:
+
+* `__init__(self, parsed_config)`
+   - Arguments:
+     - `parsed_config` - A configuration object that is the return value of the
+       `parse_config` method. You should set any configuration options needed by
+       the module here.
+* `parse_config(config)`
+    - This method should have the `@staticmethod` decoration.
+    - Arguments:
+        - `config` - A `dict` representing the parsed content of the
+          `oidc_providers.user_mapping_provider.config` homeserver config option.
+           Runs on homeserver startup. Providers should extract and validate
+           any option values they need here.
+    - Whatever is returned will be passed back to the user mapping provider module's
+      `__init__` method during construction.
+* `get_remote_user_id(self, userinfo)`
+    - Arguments:
+      - `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
+                     information from.
+    - This method must return a string, which is the unique, immutable identifier
+      for the user. Commonly the `sub` claim of the response.
+* `map_user_attributes(self, userinfo, token, failures)`
+    - This method must be async.
+    - Arguments:
+      - `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
+                     information from.
+      - `token` - A dictionary which includes information necessary to make
+                  further requests to the OpenID provider.
+      - `failures` - An `int` that represents the number of times the returned
+                     mxid localpart mapping has failed.  This should be used
+                     to create a deduplicated mxid localpart which should be
+                     returned instead. For example, if this method returns
+                     `john.doe` as the value of `localpart` in the returned
+                     dict, and that is already taken on the homeserver, this
+                     method will be called again with the same parameters but
+                     with failures=1. The method should then return a different
+                     `localpart` value, such as `john.doe1`.
+    - Returns a dictionary with two keys:
+      - `localpart`: A string, used to generate the Matrix ID. If this is
+        `None`, the user is prompted to pick their own username. This is only used
+        during a user's first login. Once a localpart has been associated with a
+        remote user ID (see `get_remote_user_id`) it cannot be updated.
+      - `displayname`: An optional string, the display name for the user.
+* `get_extra_attributes(self, userinfo, token)`
+    - This method must be async.
+    - Arguments:
+      - `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
+                     information from.
+      - `token` - A dictionary which includes information necessary to make
+                  further requests to the OpenID provider.
+    - Returns a dictionary that is suitable to be serialized to JSON. This
+      will be returned as part of the response during a successful login.
+
+      Note that care should be taken to not overwrite any of the parameters
+      usually returned as part of the [login response](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
+
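+For illustration, here is a minimal sketch of a custom OpenID mapping provider
+implementing the methods above. The module and class names are hypothetical, and
+the sketch assumes the provider returns `sub`, `preferred_username` and `name`
+claims; adjust it to whatever your provider actually returns.
+
+```python
+from authlib.oidc.core.claims import UserInfo
+
+
+class MyOidcMappingProvider:
+    def __init__(self, parsed_config):
+        # `parsed_config` is whatever `parse_config` returned below.
+        self._config = parsed_config
+
+    @staticmethod
+    def parse_config(config: dict) -> dict:
+        # Extract and validate options from
+        # `user_mapping_provider.config`; this sketch just keeps the dict.
+        return config
+
+    def get_remote_user_id(self, userinfo: UserInfo) -> str:
+        # Use the `sub` claim as the unique, immutable identifier.
+        return userinfo["sub"]
+
+    async def map_user_attributes(self, userinfo: UserInfo, token: dict, failures: int) -> dict:
+        # Build the localpart from `preferred_username`, appending `failures`
+        # to deduplicate if the name is already taken on the homeserver.
+        # A real provider should also normalise this into a valid localpart.
+        localpart = userinfo["preferred_username"]
+        if failures:
+            localpart += str(failures)
+        return {"localpart": localpart, "displayname": userinfo.get("name")}
+
+    async def get_extra_attributes(self, userinfo: UserInfo, token: dict) -> dict:
+        # Nothing extra to include in the login response.
+        return {}
+```
+
+The `user_mapping_provider.module` setting for the provider would then point at
+this class, e.g. `my_mapping_provider.MyOidcMappingProvider` (a hypothetical
+module that must be importable by Synapse).
+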
+### Default OpenID Mapping Provider
+
+Synapse has a built-in OpenID mapping provider if a custom provider isn't
+specified in the config. It is located at
+[`synapse.handlers.oidc.JinjaOidcMappingProvider`](https://github.com/matrix-org/synapse/blob/develop/synapse/handlers/oidc.py).
+
+## SAML Mapping Providers
+
+The SAML mapping provider can be customized by editing the
+`saml2_config.user_mapping_provider.module` config option.
+
+`saml2_config.user_mapping_provider.config` allows you to provide custom
+configuration options to the module. Check with the module's documentation for
+what options it provides (if any). The options listed by default are for the
+user mapping provider built in to Synapse. If using a custom module, you should
+comment these options out and use those specified by the module instead.
+
+### Building a Custom SAML Mapping Provider
+
+A custom mapping provider must specify the following methods:
+
+* `__init__(self, parsed_config, module_api)`
+   - Arguments:
+     - `parsed_config` - A configuration object that is the return value of the
+       `parse_config` method. You should set any configuration options needed by
+       the module here.
+     - `module_api` - a `synapse.module_api.ModuleApi` object which provides the
+       stable API available for extension modules.
+* `parse_config(config)`
+    - This method should have the `@staticmethod` decoration.
+    - Arguments:
+        - `config` - A `dict` representing the parsed content of the
+          `saml2_config.user_mapping_provider.config` homeserver config option.
+           Runs on homeserver startup. Providers should extract and validate
+           any option values they need here.
+    - Whatever is returned will be passed back to the user mapping provider module's
+      `__init__` method during construction.
+* `get_saml_attributes(config)`
+    - This method should have the `@staticmethod` decoration.
+    - Arguments:
+        - `config` - An object resulting from a call to `parse_config`.
+    - Returns a tuple of two sets. The first set equates to the SAML auth
+      response attributes that are required for the module to function, whereas
+      the second set consists of those attributes which can be used if available,
+      but are not necessary.
+* `get_remote_user_id(self, saml_response, client_redirect_url)`
+    - Arguments:
+      - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
+                          information from.
+      - `client_redirect_url` - A string, the URL that the client will be
+                                redirected to.
+    - This method must return a string, which is the unique, immutable identifier
+      for the user. Commonly the `uid` claim of the response.
+* `saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
+    - Arguments:
+      - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
+                          information from.
+      - `failures` - An `int` that represents the number of times the returned
+                     mxid localpart mapping has failed.  This should be used
+                     to create a deduplicated mxid localpart which should be
+                     returned instead. For example, if this method returns
+                     `john.doe` as the value of `mxid_localpart` in the returned
+                     dict, and that is already taken on the homeserver, this
+                     method will be called again with the same parameters but
+                     with failures=1. The method should then return a different
+                     `mxid_localpart` value, such as `john.doe1`.
+      - `client_redirect_url` - A string, the URL that the client will be
+                                redirected to.
+    - This method must return a dictionary, which will then be used by Synapse
+      to build a new user. The following keys are allowed:
+       * `mxid_localpart` - A string, the mxid localpart of the new user. If this is
+         `None`, the user is prompted to pick their own username. This is only used
+         during a user's first login. Once a localpart has been associated with a
+         remote user ID (see `get_remote_user_id`) it cannot be updated.
+       * `displayname` - The displayname of the new user. If not provided, will default to
+                         the value of `mxid_localpart`.
+       * `emails` - A list of emails for the new user. If not provided, will
+                    default to an empty list.
+
+       Alternatively it can raise a `synapse.api.errors.RedirectException` to
+       redirect the user to another page. This is useful to prompt the user for
+       additional information, e.g. if you want them to provide their own username.
+       It is the responsibility of the mapping provider to either redirect back
+       to `client_redirect_url` (including any additional information) or to
+       complete registration using methods from the `ModuleApi`.
+
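+For illustration, here is a minimal sketch of a custom SAML mapping provider
+implementing the methods above. The module and class names are hypothetical, and
+the sketch assumes the IdP supplies `uid` and `displayName` SAML attributes
+(pysaml2 exposes attribute values as lists of strings via `saml_response.ava`).
+
+```python
+class MySamlMappingProvider:
+    def __init__(self, parsed_config, module_api):
+        # `parsed_config` is whatever `parse_config` returned below.
+        self._config = parsed_config
+        self._module_api = module_api
+
+    @staticmethod
+    def parse_config(config: dict) -> dict:
+        # Extract and validate options from
+        # `saml2_config.user_mapping_provider.config`; this sketch keeps the dict.
+        return config
+
+    @staticmethod
+    def get_saml_attributes(config):
+        # `uid` is required; `displayName` is used if available.
+        return {"uid"}, {"displayName"}
+
+    def get_remote_user_id(self, saml_response, client_redirect_url):
+        # Use the `uid` attribute as the unique, immutable identifier.
+        return saml_response.ava["uid"][0]
+
+    def saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url):
+        # Build the localpart from `uid`, appending `failures` to deduplicate
+        # if the name is already taken on the homeserver. A real provider
+        # should also normalise this into a valid Matrix localpart.
+        localpart = saml_response.ava["uid"][0]
+        if failures:
+            localpart += str(failures)
+        displayname = saml_response.ava.get("displayName", [None])[0]
+        return {
+            "mxid_localpart": localpart,
+            "displayname": displayname,
+            "emails": [],
+        }
+```
+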
+### Default SAML Mapping Provider
+
+Synapse has a built-in SAML mapping provider if a custom provider isn't
+specified in the config. It is located at
+[`synapse.handlers.saml.DefaultSamlMappingProvider`](https://github.com/matrix-org/synapse/blob/develop/synapse/handlers/saml.py).
diff --git a/docs/usage/configuration/user_directory.md b/docs/usage/configuration/user_directory.md
new file mode 100644
index 0000000000..c4794b04cf
--- /dev/null
+++ b/docs/usage/configuration/user_directory.md
@@ -0,0 +1,49 @@
+User Directory API Implementation
+=================================
+
+The user directory is currently maintained based on the 'visible' users
+on this particular server - i.e. ones which your account shares a room with, or
+who are present in a publicly viewable room on the server.
+
+The directory info is stored in various tables, which can (typically after
+DB corruption) get stale or out of sync. If this happens, the current
+solution is to use the [admin API](usage/administration/admin_api/background_updates.md#run)
+to run the `regenerate_directory` job. This should then start a background task to
+flush the current tables and regenerate the directory.
+
+Data model
+----------
+
+There are five relevant tables that collectively form the "user directory".
+Three of them track a master list of all the users we could search for.
+The last two (collectively called the "search tables") track who can
+see who.
+
+From all of these tables we exclude three types of local user:
+  - support users
+  - appservice users
+  - deactivated users
+
+* `user_directory`. This contains the user_id, display name and avatar we'll
+  return when you search the directory.
+  - Because there's only one directory entry per user, it's important that we only
+    ever put publicly visible names here. Otherwise we might leak a private
+    nickname or avatar used in a private room.
+  - Indexed on rooms. Indexed on users.
+
+* `user_directory_search`. To be joined to `user_directory`. It contains an extra
+  column that enables full text search based on user ids and display names.
+  Different schemas for SQLite and Postgres with different code paths to match.
+  - Indexed on the full text search data. Indexed on users.
+
+* `user_directory_stream_pos`. When the initial background update to populate
+  the directory is complete, we record a stream position here. This indicates
+  that synapse should now listen for room changes and incrementally update
+  the directory where necessary.
+
+* `users_in_public_rooms`. Contains associations between users and the public rooms they're in.
+  Used to determine which users are in public rooms and should be publicly visible in the directory.
+
+* `users_who_share_private_rooms`. Rows are triples `(L, M, room id)` where `L`
+   is a local user and `M` is a local or remote user. `L` and `M` should be
+   different, but this isn't enforced by a constraint.
diff --git a/docs/usage/configuration/workers/README.md b/docs/usage/configuration/workers/README.md
new file mode 100644
index 0000000000..17c8bfeef6
--- /dev/null
+++ b/docs/usage/configuration/workers/README.md
@@ -0,0 +1,560 @@
+# Scaling synapse via workers
+
+For small instances it is recommended to run Synapse in the default monolith mode.
+For larger instances where performance is a concern it can be helpful to split
+out functionality into multiple separate python processes. These processes are
+called 'workers', and are (eventually) intended to scale horizontally
+independently.
+
+Synapse's worker support is under active development and subject to change as
+we attempt to rapidly scale ever larger Synapse instances. However we are
+documenting it here to help admins needing a highly scalable Synapse instance
+similar to the one running `matrix.org`.
+
+All processes continue to share the same database instance, and as such,
+workers only work with PostgreSQL-based Synapse deployments. SQLite should only
+be used for demo purposes and any admin considering workers should already be
+running PostgreSQL.
+
+See also [Matrix.org blog post](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability)
+for a higher level overview.
+
+## Main process/worker communication
+
+The processes communicate with each other via a Synapse-specific protocol called
+'replication' (analogous to MySQL- or Postgres-style database replication) which
+feeds streams of newly written data between processes so they can be kept in
+sync with the database state.
+
+When configured to do so, Synapse uses a
+[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
+stream between all configured Synapse processes. Additionally, processes may
+make HTTP requests to each other, primarily for operations which need to wait
+for a reply ─ such as sending an event.
+
+Redis support was added in v1.13.0, and it became the recommended method in
+v1.18.0. It replaced the old direct TCP connections (which are deprecated as of
+v1.18.0) to the main process. With Redis, rather than all the workers connecting
+to the main process, all the workers and the main process connect to Redis,
+which relays replication commands between processes. This can give a significant
+CPU saving on the main process and will be a prerequisite for upcoming
+performance improvements.
+
+If Redis support is enabled Synapse will use it as a shared cache, as well as a
+pub/sub mechanism.
+
+See the [Architectural diagram](#architectural-diagram) section at the end for
+a visualisation of what this looks like.
+
+
+## Setting up workers
+
+A Redis server is required to manage the communication between the processes.
+The Redis server should be installed following the normal procedure for your
+distribution (e.g. `apt install redis-server` on Debian). It is safe to use an
+existing Redis deployment if you have one.
+
+Once installed, check that Redis is running and accessible from the host running
+Synapse, for example by executing `echo PING | nc -q1 localhost 6379` and seeing
+a response of `+PONG`.
+
+The appropriate dependencies must also be installed for Synapse. If using a
+virtualenv, these can be installed with:
+
+```sh
+pip install "matrix-synapse[redis]"
+```
+
+Note that these dependencies are included when synapse is installed with `pip
+install matrix-synapse[all]`. They are also included in the debian packages from
+`matrix.org` and in the docker images at
+https://hub.docker.com/r/matrixdotorg/synapse/.
+
+To make effective use of the workers, you will need to configure an HTTP
+reverse-proxy such as nginx or haproxy, which will direct incoming requests to
+the correct worker, or to the main synapse instance. See
+[the reverse proxy documentation](reverse_proxy.md) for information on setting up a reverse
+proxy.
+
+When using workers, each worker process has its own configuration file which
+contains settings specific to that worker, such as the HTTP listener that it
+provides (if any), logging configuration, etc.
+
+Normally, the worker processes are configured to read from a shared
+configuration file as well as the worker-specific configuration files. This
+makes it easier to keep common configuration settings synchronised across all
+the processes.
+
+The main process is somewhat special in this respect: it does not normally
+need its own configuration file and can take all of its configuration from the
+shared configuration file.
+
+
+### Shared configuration
+
+Normally, only a couple of changes are needed to make an existing configuration
+file suitable for use with workers. First, you need to enable an "HTTP replication
+listener" for the main process; second, you need to enable redis-based
+replication. Optionally, a shared secret can be used to authenticate HTTP
+traffic between workers. For example:
+
+
+```yaml
+# extend the existing `listeners` section. This defines the ports that the
+# main process will listen on.
+listeners:
+  # The HTTP replication port
+  - port: 9093
+    bind_address: '127.0.0.1'
+    type: http
+    resources:
+     - names: [replication]
+
+# Add a random shared secret to authenticate traffic.
+worker_replication_secret: ""
+
+redis:
+    enabled: true
+```
+
+See the sample config for the full documentation of each option.
+
+Under **no circumstances** should the replication listener be exposed to the
+public internet; it has no authentication and is unencrypted.
+
+
+### Worker configuration
+
+In the config file for each worker, you must specify the type of worker
+application (`worker_app`), and you should specify a unique name for the worker
+(`worker_name`). The currently available worker applications are listed below.
+You must also specify the HTTP replication endpoint that it should talk to on
+the main synapse process.  `worker_replication_host` should specify the host of
+the main synapse and `worker_replication_http_port` should point to the HTTP
+replication port. If the worker will handle HTTP requests then the
+`worker_listeners` option should be set with an `http` listener, in the same way
+as the `listeners` option in the shared config.
+
+For example:
+
+```yaml
+worker_app: synapse.app.generic_worker
+worker_name: worker1
+
+# The replication listener on the main synapse process.
+worker_replication_host: 127.0.0.1
+worker_replication_http_port: 9093
+
+worker_listeners:
+ - type: http
+   port: 8083
+   resources:
+     - names:
+       - client
+       - federation
+
+worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
+```
+
+...is a full configuration for a generic worker instance, which will expose a
+plain HTTP endpoint on port 8083, separately serving various endpoints (e.g.
+`/sync`), which are listed below.
+
+Obviously you should configure your reverse-proxy to route the relevant
+endpoints to the worker (`localhost:8083` in the above example).
+
+
+### Running Synapse with workers
+
+Finally, you need to start your worker processes. This can be done with either
+`synctl` or your distribution's preferred service manager such as `systemd`. We
+recommend the use of `systemd` where available: for information on setting up
+`systemd` to start synapse workers, see
+[Systemd with Workers](systemd-with-workers). To use `synctl`, see
+[Using synctl with Workers](synctl_workers.md).
+
+
+## Available worker applications
+
+### `synapse.app.generic_worker`
+
+This worker can handle API requests matching the following regular
+expressions:
+
+    # Sync requests
+    ^/_matrix/client/(v2_alpha|r0|v3)/sync$
+    ^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
+    ^/_matrix/client/(api/v1|r0|v3)/initialSync$
+    ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
+
+    # Federation requests
+    ^/_matrix/federation/v1/event/
+    ^/_matrix/federation/v1/state/
+    ^/_matrix/federation/v1/state_ids/
+    ^/_matrix/federation/v1/backfill/
+    ^/_matrix/federation/v1/get_missing_events/
+    ^/_matrix/federation/v1/publicRooms
+    ^/_matrix/federation/v1/query/
+    ^/_matrix/federation/v1/make_join/
+    ^/_matrix/federation/v1/make_leave/
+    ^/_matrix/federation/v1/send_join/
+    ^/_matrix/federation/v2/send_join/
+    ^/_matrix/federation/v1/send_leave/
+    ^/_matrix/federation/v2/send_leave/
+    ^/_matrix/federation/v1/invite/
+    ^/_matrix/federation/v2/invite/
+    ^/_matrix/federation/v1/query_auth/
+    ^/_matrix/federation/v1/event_auth/
+    ^/_matrix/federation/v1/exchange_third_party_invite/
+    ^/_matrix/federation/v1/user/devices/
+    ^/_matrix/federation/v1/get_groups_publicised$
+    ^/_matrix/key/v2/query
+    ^/_matrix/federation/unstable/org.matrix.msc2946/spaces/
+    ^/_matrix/federation/unstable/org.matrix.msc2946/hierarchy/
+
+    # Inbound federation transaction request
+    ^/_matrix/federation/v1/send/
+
+    # Client API requests
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
+    ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
+    ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
+    ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/query$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/changes$
+    ^/_matrix/client/versions$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_groups$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/search$
+
+    # Registration/login requests
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
+    ^/_matrix/client/(r0|v3|unstable)/register$
+    ^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$
+
+    # Event sending requests
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/join/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
+
+
+Additionally, the following REST endpoints can be handled for GET requests:
+
+    ^/_matrix/federation/v1/groups/
+
+Pagination requests can also be handled, but all requests for a given
+room must be routed to the same instance. Additionally, care must be taken to
+ensure that the purge history admin API is not used while pagination requests
+for the room are in flight:
+
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$
+
+Additionally, the following endpoints should be included if Synapse is configured
+to use SSO (you only need to include the ones for whichever SSO provider you're
+using):
+
+    # for all SSO providers
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/sso/redirect
+    ^/_synapse/client/pick_idp$
+    ^/_synapse/client/pick_username
+    ^/_synapse/client/new_user_consent$
+    ^/_synapse/client/sso_register$
+
+    # OpenID Connect requests.
+    ^/_synapse/client/oidc/callback$
+
+    # SAML requests.
+    ^/_synapse/client/saml2/authn_response$
+
+    # CAS requests.
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/cas/ticket$
+
+Ensure that all SSO logins go to a single process.
+For the problems that can arise when the SSO endpoints are handled by multiple workers, see
+[#7530](https://github.com/matrix-org/synapse/issues/7530) and
+[#9427](https://github.com/matrix-org/synapse/issues/9427).
+
+Note that an HTTP listener with `client` and `federation` resources must be
+configured in the `worker_listeners` option in the worker config.
+
+#### Load balancing
+
+It is possible to run multiple instances of this worker app, with incoming requests
+being load-balanced between them by the reverse-proxy. However, different endpoints
+have different characteristics and so admins
+may wish to run multiple groups of workers handling different endpoints so that
+load balancing can be done in different ways.
+
+For `/sync` and `/initialSync` requests it will be more efficient if all
+requests from a particular user are routed to a single instance. Extracting a
+user ID from the access token or `Authorization` header is currently left as an
+exercise for the reader. Admins may additionally wish to separate out `/sync`
+requests that have a `since` query parameter from those that don't (and
+`/initialSync`): requests without a `since` parameter are "initial syncs", which
+happen when a user logs in on a new device and can be *very* resource intensive, so
+isolating these requests will stop them from interfering with other users' ongoing
+syncs.
+
+Federation and client requests can be balanced via simple round robin.
+
+The inbound federation transaction request `^/_matrix/federation/v1/send/`
+should be balanced by source IP so that transactions from the same remote server
+go to the same process.
+
+Registration/login requests can be handled separately purely to help ensure that
+unexpected load doesn't affect new logins and sign ups.
+
+Finally, event sending requests can be balanced by the room ID in the URI (or
+the full URI, or even just round robin); the room ID is the path component after
+`/rooms/`. If there is a large bridge connected that is sending or may send lots
+of events, then a dedicated set of workers can be provisioned to limit the
+effects of bursts of events from that bridge on events sent by normal users.
+
+#### Stream writers
+
+Additionally, there is *experimental* support for moving writing of specific
+streams (such as events) off of the main process to a particular worker. (This
+is only supported with Redis-based replication.)
+
+Currently supported streams are `events` and `typing`.
+
+To enable this, the worker must have an HTTP replication listener configured,
+have a `worker_name` and be listed in the `instance_map` config. For example, to
+move event persistence off to a dedicated worker, the shared configuration would
+include:
+
+```yaml
+instance_map:
+    event_persister1:
+        host: localhost
+        port: 8034
+
+stream_writers:
+    events: event_persister1
+```
+
+The `events` stream also experimentally supports having multiple writers, where
+work is sharded between them by room ID. Note that you *must* restart all worker
+instances when adding or removing event persisters. An example `stream_writers`
+configuration with multiple writers:
+
+```yaml
+stream_writers:
+    events:
+        - event_persister1
+        - event_persister2
+```
+
+#### Background tasks
+
+There is also *experimental* support for moving background tasks to a separate
+worker. Background tasks are run periodically or started via replication. Exactly
+which tasks are configured to run depends on your Synapse configuration (e.g. if
+stats is enabled).
+
+To enable this, the worker must have a `worker_name` and can be configured to run
+background tasks. For example, to move background tasks to a dedicated worker,
+the shared configuration would include:
+
+```yaml
+run_background_tasks_on: background_worker
+```
+
+You might also wish to investigate the `update_user_directory` and
+`media_instance_running_background_jobs` settings.
+
+### `synapse.app.pusher`
+
+Handles sending push notifications to sygnal and email. Doesn't handle any
+REST endpoints itself, but you should set `start_pushers: False` in the
+shared configuration file to stop the main synapse sending push notifications.
+
+To run multiple instances at once the `pusher_instances` option should list all
+pusher instances by their worker name, e.g.:
+
+```yaml
+pusher_instances:
+    - pusher_worker1
+    - pusher_worker2
+```
+
+
+### `synapse.app.appservice`
+
+Handles sending output traffic to Application Services. Doesn't handle any
+REST endpoints itself, but you should set `notify_appservices: False` in the
+shared configuration file to stop the main synapse sending appservice notifications.
+
+Note this worker cannot be load-balanced: only one instance should be active.
+
+
+### `synapse.app.federation_sender`
+
+Handles sending federation traffic to other servers. Doesn't handle any
+REST endpoints itself, but you should set `send_federation: False` in the
+shared configuration file to stop the main synapse sending this traffic.
+
+If running multiple federation senders then you must list each
+instance in the `federation_sender_instances` option by their `worker_name`.
+All instances must be stopped and started when adding or removing instances.
+For example:
+
+```yaml
+federation_sender_instances:
+    - federation_sender1
+    - federation_sender2
+```
+
+### `synapse.app.media_repository`
+
+Handles the media repository. It can handle all endpoints starting with:
+
+    /_matrix/media/
+
+... and the following regular expressions matching media-specific administration APIs:
+
+    ^/_synapse/admin/v1/purge_media_cache$
+    ^/_synapse/admin/v1/room/.*/media.*$
+    ^/_synapse/admin/v1/user/.*/media.*$
+    ^/_synapse/admin/v1/media/.*$
+    ^/_synapse/admin/v1/quarantine_media/.*$
+    ^/_synapse/admin/v1/users/.*/media$
+
+You should also set `enable_media_repo: False` in the shared configuration
+file to stop the main synapse running background jobs related to managing the
+media repository. Note that doing so will prevent the main process from being
+able to handle the above endpoints.
+
+In the `media_repository` worker configuration file, configure the http listener to
+expose the `media` resource. For example:
+
+```yaml
+worker_listeners:
+ - type: http
+   port: 8085
+   resources:
+     - names:
+       - media
+```
+
+Note that if running multiple media repositories they must be on the same server
+and you must configure a single instance to run the background tasks, e.g.:
+
+```yaml
+media_instance_running_background_jobs: "media-repository-1"
+```
+
+Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).
+
+### `synapse.app.user_dir`
+
+Handles searches in the user directory. It can handle REST endpoints matching
+the following regular expressions:
+
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$
+
+When using this worker you must also set `update_user_directory: False` in the
+shared configuration file to stop the main synapse running background
+jobs related to updating the user directory.
+
+### `synapse.app.frontend_proxy`
+
+Proxies some frequently-requested client endpoints to add caching and remove
+load from the main synapse. It can handle REST endpoints matching the following
+regular expressions:
+
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload
+
+If `use_presence` is False in the homeserver config, it can also handle REST
+endpoints matching the following regular expressions:
+
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/[^/]+/status
+
+This "stub" presence handler will pass through `GET` requests but make the
+`PUT` effectively a no-op.
+
+It will proxy any requests it cannot handle to the main synapse instance. It
+must therefore be configured with the location of the main instance, via
+the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
+file. For example:
+
+```yaml
+worker_main_http_uri: http://127.0.0.1:8008
+```
+
+### Historical apps
+
+*Note:* Historically there used to be more apps, however they have been
+amalgamated into a single `synapse.app.generic_worker` app. The remaining apps
+are ones that do specific processing unrelated to requests, e.g. the `pusher`
+that handles sending out push notifications for new events. The intention is for
+all these to be folded into the `generic_worker` app and to use config to define
+which processes handle the various processing, such as push notifications.
+
+
+## Migration from old config
+
+There are two main independent changes that have been made: introducing Redis
+support and merging apps into `synapse.app.generic_worker`. Both these changes
+are backwards compatible and so no changes to the config are required, however
+server admins are encouraged to plan to migrate to Redis as the old style direct
+TCP replication config is deprecated.
+
+To migrate to Redis, add the `redis` config as above, and optionally remove the
+TCP `replication` listener from the master and `worker_replication_port` from the
+worker config.
+
+To migrate apps to use `synapse.app.generic_worker` simply update the
+`worker_app` option in the worker configs, and where workers are started (e.g.
+in systemd service files, but not required for synctl).
+
+
+## Architectural diagram
+
+The following shows an example setup using Redis and a reverse proxy:
+
+```
+                     Clients & Federation
+                              |
+                              v
+                        +-----------+
+                        |           |
+                        |  Reverse  |
+                        |  Proxy    |
+                        |           |
+                        +-----------+
+                            | | |
+                            | | | HTTP requests
+        +-------------------+ | +-----------+
+        |                 +---+             |
+        |                 |                 |
+        v                 v                 v
++--------------+  +--------------+  +--------------+  +--------------+
+|   Main       |  |   Generic    |  |   Generic    |  |  Event       |
+|   Process    |  |   Worker 1   |  |   Worker 2   |  |  Persister   |
++--------------+  +--------------+  +--------------+  +--------------+
+      ^    ^          |   ^   |         |   ^   |          ^    ^
+      |    |          |   |   |         |   |   |          |    |
+      |    |          |   |   |  HTTP   |   |   |          |    |
+      |    +----------+<--|---|---------+   |   |          |    |
+      |                   |   +-------------|-->+----------+    |
+      |                   |                 |                   |
+      |                   |                 |                   |
+      v                   v                 v                   v
+====================================================================
+                                                         Redis pub/sub channel
+```
diff --git a/docs/usage/configuration/workers/synctl_workers.md b/docs/usage/configuration/workers/synctl_workers.md
new file mode 100644
index 0000000000..15e37f608d
--- /dev/null
+++ b/docs/usage/configuration/workers/synctl_workers.md
@@ -0,0 +1,36 @@
+### Using synctl with workers
+
+If you want to use `synctl` to manage your synapse processes, you will need to
+create an additional configuration file for the main synapse process. That
+configuration should look like this:
+
+```yaml
+worker_app: synapse.app.homeserver
+```
+
+Additionally, each worker app must be configured with the name of a "pid file",
+to which it will write its process ID when it starts. For example, for a
+synchrotron, you might write:
+
+```yaml
+worker_pid_file: /home/matrix/synapse/worker1.pid
+```
+
+Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
+commandline option to tell it to operate on all the worker configurations found
+in the given directory, e.g.:
+
+```sh
+synctl -a $CONFIG/workers start
+```
+
+Currently one should always restart all workers when restarting or upgrading
+synapse, unless you explicitly know it's safe not to.  For instance, restarting
+synapse without restarting all the synchrotrons may result in broken typing
+notifications.
+
+To manipulate a specific worker, you pass the `-w` option to synctl:
+
+```sh
+synctl -w $CONFIG/workers/worker1.yaml restart
+```