author     Ben Banfield-Zanin <benbz@matrix.org>  2020-06-25 08:33:23 +0100
committer  Ben Banfield-Zanin <benbz@matrix.org>  2020-06-25 08:33:23 +0100
commit     2e9f389fd2d4cd9acaf9f4a0ac5e3ea358bfc860 (patch)
tree       6f6f2b3d1b0c40c16008fd34e3435881e353bda8 /docs
parent     Merge remote-tracking branch 'origin/babolivier/info_mainline' into bbz/info-... (diff)
parent     Fix changelog wording (diff)
download   synapse-2e9f389fd2d4cd9acaf9f4a0ac5e3ea358bfc860.tar.xz
Merge remote-tracking branch 'origin/release-v1.15.1' into bbz/info-mainline-1.15
Diffstat (limited to 'docs')
-rw-r--r--  docs/admin_api/README.rst                                        |   18
-rw-r--r--  docs/admin_api/delete_group.md                                   |    4
-rw-r--r--  docs/admin_api/media_admin_api.md                                |    6
-rw-r--r--  docs/admin_api/purge_history_api.rst                             |    9
-rw-r--r--  docs/admin_api/purge_remote_media.rst                            |    7
-rw-r--r--  docs/admin_api/room_membership.md                                |   35
-rw-r--r--  docs/admin_api/rooms.md                                          |  161
-rw-r--r--  docs/admin_api/user_admin_api.rst                                |  332
-rw-r--r--  docs/application_services.md                                     |    4
-rw-r--r--  docs/dev/cas.md                                                  |   64
-rw-r--r--  docs/dev/git.md                                                  |  148
-rw-r--r--  docs/dev/git/branches.jpg                                        |  bin 0 -> 72228 bytes
-rw-r--r--  docs/dev/git/clean.png                                           |  bin 0 -> 110840 bytes
-rw-r--r--  docs/dev/git/squash.png                                          |  bin 0 -> 29667 bytes
-rw-r--r--  docs/dev/saml.md                                                 |    8
-rw-r--r--  docs/log_contexts.md                                             |    5
-rw-r--r--  docs/metrics-howto.md                                            |   25
-rw-r--r--  docs/openid.md                                                   |  206
-rw-r--r--  docs/password_auth_providers.md                                  |    6
-rw-r--r--  docs/postgres.md                                                 |   73
-rw-r--r--  docs/reverse_proxy.md                                            |  151
-rw-r--r--  docs/saml_mapping_providers.md                                   |   77
-rw-r--r--  docs/sample_config.yaml                                          |  533
-rw-r--r--  docs/spam_checker.md                                             |   19
-rw-r--r--  docs/sso_mapping_providers.md                                    |  148
-rw-r--r--  docs/systemd-with-workers/README.md                              |   67
-rw-r--r--  docs/systemd-with-workers/system/matrix-synapse-worker@.service  |   20
-rw-r--r--  docs/systemd-with-workers/system/matrix-synapse.service          |   21
-rw-r--r--  docs/systemd-with-workers/system/matrix-synapse.target           |    6
-rw-r--r--  docs/systemd-with-workers/workers/federation_reader.yaml         |   13
-rw-r--r--  docs/tcp_replication.md                                          |   95
-rw-r--r--  docs/turn-howto.md                                               |   66
-rw-r--r--  docs/workers.md                                                  |  207
33 files changed, 2087 insertions, 447 deletions
diff --git a/docs/admin_api/README.rst b/docs/admin_api/README.rst
index 191806c5b4..9587bee0ce 100644
--- a/docs/admin_api/README.rst
+++ b/docs/admin_api/README.rst
@@ -4,17 +4,21 @@ Admin APIs
 This directory includes documentation for the various synapse specific admin
 APIs available.
 
-Only users that are server admins can use these APIs. A user can be marked as a
-server admin by updating the database directly, e.g.:
+Authenticating as a server admin
+--------------------------------
 
-``UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'``
+Many of the API calls in the admin API will require an ``access_token`` for a
+server admin. (Note that a server admin is distinct from a room admin.)
 
-Restarting may be required for the changes to register.
+A user can be marked as a server admin by updating the database directly, e.g.:
 
-Using an admin access_token
-###########################
+.. code-block:: sql
+
+    UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
+
+A new server admin user can also be created using the
+``register_new_matrix_user`` script.
 
-Many of the API calls listed in the documentation here will require to include an admin `access_token`.
 Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
 
 Once you have your `access_token`, to include it in a request, the best option is to add the token to a request header:
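+
+For example, a minimal sketch using curl (the token and request URL are
+placeholders to be replaced with your own):
+
+.. code-block:: bash
+
+    curl --header "Authorization: Bearer <access_token>" '<the_rest_of_your_API_request>'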
diff --git a/docs/admin_api/delete_group.md b/docs/admin_api/delete_group.md
index 1710488ea8..c061678e75 100644
--- a/docs/admin_api/delete_group.md
+++ b/docs/admin_api/delete_group.md
@@ -4,11 +4,11 @@ This API lets a server admin delete a local group. Doing so will kick all
 users out of the group so that their clients will correctly handle the group
 being deleted.
 
-
 The API is:
 
 ```
 POST /_synapse/admin/v1/delete_group/<group_id>
 ```
 
-including an `access_token` of a server admin.
+To use it, you will need to authenticate by providing an `access_token` for a
+server admin: see [README.rst](README.rst).
diff --git a/docs/admin_api/media_admin_api.md b/docs/admin_api/media_admin_api.md
index 46ba7a1a71..26948770d8 100644
--- a/docs/admin_api/media_admin_api.md
+++ b/docs/admin_api/media_admin_api.md
@@ -6,9 +6,10 @@ The API is:
 ```
 GET /_synapse/admin/v1/room/<room_id>/media
 ```
-including an `access_token` of a server admin.
+To use it, you will need to authenticate by providing an `access_token` for a
+server admin: see [README.rst](README.rst).
 
-It returns a JSON body like the following:
+The API returns a JSON body like the following:
 ```
 {
     "local": [
@@ -99,4 +100,3 @@ Response:
   "num_quarantined": 10  # The number of media items successfully quarantined
 }
 ```
-
diff --git a/docs/admin_api/purge_history_api.rst b/docs/admin_api/purge_history_api.rst
index e2a620c54f..92cd05f2a0 100644
--- a/docs/admin_api/purge_history_api.rst
+++ b/docs/admin_api/purge_history_api.rst
@@ -15,7 +15,8 @@ The API is:
 
 ``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]``
 
-including an ``access_token`` of a server admin.
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
 
 By default, events sent by local users are not deleted, as they may represent
 the only copies of this content in existence. (Events sent by remote users are
@@ -54,8 +55,10 @@ It is possible to poll for updates on recent purges with a second API;
 
 ``GET /_synapse/admin/v1/purge_history_status/<purge_id>``
 
-(again, with a suitable ``access_token``). This API returns a JSON body like
-the following:
+Again, you will need to authenticate by providing an ``access_token`` for a
+server admin.
+
+This API returns a JSON body like the following:
 
 .. code:: json
 
diff --git a/docs/admin_api/purge_remote_media.rst b/docs/admin_api/purge_remote_media.rst
index dacd5bc8fb..00cb6b0589 100644
--- a/docs/admin_api/purge_remote_media.rst
+++ b/docs/admin_api/purge_remote_media.rst
@@ -6,12 +6,15 @@ media.
 
 The API is::
 
-    POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>&access_token=<access_token>
+    POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
 
     {}
 
-Which will remove all cached media that was last accessed before
+\... which will remove all cached media that was last accessed before
 ``<unix_timestamp_in_ms>``.
 
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
 If the user re-requests purged remote media, synapse will re-request the media
 from the originating server.
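+
+For example, a hypothetical invocation with curl that purges media last
+accessed more than 30 days ago (the host and token are placeholders, and GNU
+``date`` is assumed for the millisecond timestamp)::
+
+    curl -X POST --header "Authorization: Bearer <access_token>" --data '{}' \
+        "http://localhost:8008/_synapse/admin/v1/purge_media_cache?before_ts=$(date -d '30 days ago' +%s)000"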
diff --git a/docs/admin_api/room_membership.md b/docs/admin_api/room_membership.md
new file mode 100644
index 0000000000..b6746ff5e4
--- /dev/null
+++ b/docs/admin_api/room_membership.md
@@ -0,0 +1,35 @@
+# Edit Room Membership API
+
+This API allows an administrator to join a user account with a given `user_id`
+to a room with a given `room_id_or_alias`. You can only modify the membership of
+local users. The server administrator must be in the room and have permission to
+invite users.
+
+## Parameters
+
+The following parameters are available:
+
+* `user_id` - Fully qualified user: for example, `@user:server.com`.
+* `room_id_or_alias` - The room identifier or alias to join: for example,
+  `!636q39766251:server.com`.
+
+## Usage
+
+```
+POST /_synapse/admin/v1/join/<room_id_or_alias>
+
+{
+  "user_id": "@user:server.com"
+}
+```
+
+To use it, you will need to authenticate by providing an `access_token` for a
+server admin: see [README.rst](README.rst).
+
+Response:
+
+```
+{
+  "room_id": "!636q39766251:server.com"
+}
+```
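+
+For example, a hypothetical invocation with curl (the homeserver URL and token
+are placeholders; note that the room ID is URL-encoded):
+
+```
+curl -X POST --header "Authorization: Bearer <access_token>" \
+    --data '{"user_id": "@user:server.com"}' \
+    "http://localhost:8008/_synapse/admin/v1/join/%21636q39766251%3Aserver.com"
+```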
diff --git a/docs/admin_api/rooms.md b/docs/admin_api/rooms.md
index 2db457c1b6..624e7745ba 100644
--- a/docs/admin_api/rooms.md
+++ b/docs/admin_api/rooms.md
@@ -11,8 +11,21 @@ The following query parameters are available:
 * `from` - Offset in the returned list. Defaults to `0`.
 * `limit` - Maximum amount of rooms to return. Defaults to `100`.
 * `order_by` - The method in which to sort the returned list of rooms. Valid values are:
-  - `alphabetical` - Rooms are ordered alphabetically by room name. This is the default.
-  - `size` - Rooms are ordered by the number of members. Largest to smallest.
+  - `alphabetical` - Same as `name`. This is deprecated.
+  - `size` - Same as `joined_members`. This is deprecated.
+  - `name` - Rooms are ordered alphabetically by room name. This is the default.
+  - `canonical_alias` - Rooms are ordered alphabetically by main alias address of the room.
+  - `joined_members` - Rooms are ordered by the number of members. Largest to smallest.
+  - `joined_local_members` - Rooms are ordered by the number of local members. Largest to smallest.
+  - `version` - Rooms are ordered by room version. Largest to smallest.
+  - `creator` - Rooms are ordered alphabetically by creator of the room.
+  - `encryption` - Rooms are ordered alphabetically by the end-to-end encryption algorithm.
+  - `federatable` - Rooms are ordered by whether the room is federatable.
+  - `public` - Rooms are ordered by visibility in room list.
+  - `join_rules` - Rooms are ordered alphabetically by join rules of the room.
+  - `guest_access` - Rooms are ordered alphabetically by guest access option of the room.
+  - `history_visibility` - Rooms are ordered alphabetically by visibility of history of the room.
+  - `state_events` - Rooms are ordered by number of state events. Largest to smallest.
 * `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
           this value to `b` will reverse the above sort order. Defaults to `f`.
 * `search_term` - Filter rooms by their room name. Search term can be contained in any
@@ -26,6 +39,16 @@ The following fields are possible in the JSON response body:
     - `name` - The name of the room.
     - `canonical_alias` - The canonical (main) alias address of the room.
     - `joined_members` - How many users are currently in the room.
+    - `joined_local_members` - How many local users are currently in the room.
+    - `version` - The version of the room as a string.
+    - `creator` - The `user_id` of the room creator.
+    - `encryption` - Algorithm of end-to-end encryption of messages. Is `null` if encryption is not active.
+    - `federatable` - Whether users on other servers can join this room.
+    - `public` - Whether the room is visible in room directory.
+    - `join_rules` - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
+    - `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
+    - `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
+    - `state_events` - Total number of state events in the room; a rough indicator of the room's complexity.
 * `offset` - The current pagination offset in rooms. This parameter should be
              used instead of `next_token` for room offset as `next_token` is
              not intended to be parsed.
@@ -60,14 +83,34 @@ Response:
       "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
       "name": "Matrix HQ",
       "canonical_alias": "#matrix:matrix.org",
-      "joined_members": 8326
+      "joined_members": 8326,
+      "joined_local_members": 2,
+      "version": "1",
+      "creator": "@foo:matrix.org",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 93534
     },
     ... (8 hidden items) ...
     {
       "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
       "name": "This Week In Matrix (TWIM)",
       "canonical_alias": "#twim:matrix.org",
-      "joined_members": 314
+      "joined_members": 314,
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:matrix.org",
+      "encryption": "m.megolm.v1.aes-sha2",
+      "federatable": true,
+      "public": false,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8345
     }
   ],
   "offset": 0,
@@ -92,7 +135,17 @@ Response:
       "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
       "name": "This Week In Matrix (TWIM)",
       "canonical_alias": "#twim:matrix.org",
-      "joined_members": 314
+      "joined_members": 314,
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:matrix.org",
+      "encryption": "m.megolm.v1.aes-sha2",
+      "federatable": true,
+      "public": false,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8
     }
   ],
   "offset": 0,
@@ -117,14 +170,34 @@ Response:
       "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
       "name": "Matrix HQ",
       "canonical_alias": "#matrix:matrix.org",
-      "joined_members": 8326
+      "joined_members": 8326,
+      "joined_local_members": 2,
+      "version": "1",
+      "creator": "@foo:matrix.org",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 93534
     },
     ... (98 hidden items) ...
     {
       "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
       "name": "This Week In Matrix (TWIM)",
       "canonical_alias": "#twim:matrix.org",
-      "joined_members": 314
+      "joined_members": 314,
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:matrix.org",
+      "encryption": "m.megolm.v1.aes-sha2",
+      "federatable": true,
+      "public": false,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8345
     }
   ],
   "offset": 0,
@@ -154,6 +227,16 @@ Response:
       "name": "Music Theory",
       "canonical_alias": "#musictheory:matrix.org",
       "joined_members": 127
+      "joined_local_members": 2,
+      "version": "1",
+      "creator": "@foo:matrix.org",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 93534
     },
     ... (48 hidden items) ...
     {
@@ -161,6 +244,16 @@ Response:
       "name": "weechat-matrix",
       "canonical_alias": "#weechat-matrix:termina.org.uk",
       "joined_members": 137
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:termina.org.uk",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8345
     }
   ],
   "offset": 100,
@@ -171,3 +264,57 @@ Response:
 
 Once the `next_token` parameter is no longer present, we know we've reached the
 end of the list.
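+
+For example, a hypothetical request listing the ten largest rooms by member
+count (the host and token are placeholders, and the endpoint path is assumed
+to be that of the List Room API described above):
+
+```
+curl --header "Authorization: Bearer <access_token>" \
+    "http://localhost:8008/_synapse/admin/v1/rooms?order_by=joined_members&limit=10"
+```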
+
+# DRAFT: Room Details API
+
+The Room Details admin API allows server admins to get all details of a room.
+
+This API is still a draft and details might change!
+
+The following fields are possible in the JSON response body:
+
+* `room_id` - The ID of the room.
+* `name` - The name of the room.
+* `canonical_alias` - The canonical (main) alias address of the room.
+* `joined_members` - How many users are currently in the room.
+* `joined_local_members` - How many local users are currently in the room.
+* `version` - The version of the room as a string.
+* `creator` - The `user_id` of the room creator.
+* `encryption` - Algorithm of end-to-end encryption of messages. Is `null` if encryption is not active.
+* `federatable` - Whether users on other servers can join this room.
+* `public` - Whether the room is visible in room directory.
+* `join_rules` - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
+* `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
+* `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
+* `state_events` - Total number of state events in the room; a rough indicator of the room's complexity.
+
+## Usage
+
+A standard request:
+
+```
+GET /_synapse/admin/v1/rooms/<room_id>
+
+{}
+```
+
+Response:
+
+```
+{
+  "room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
+  "name": "Music Theory",
+  "canonical_alias": "#musictheory:matrix.org",
+  "joined_members": 127
+  "joined_local_members": 2,
+  "version": "1",
+  "creator": "@foo:matrix.org",
+  "encryption": null,
+  "federatable": true,
+  "public": true,
+  "join_rules": "invite",
+  "guest_access": null,
+  "history_visibility": "shared",
+  "state_events": 93534
+}
+```
diff --git a/docs/admin_api/user_admin_api.rst b/docs/admin_api/user_admin_api.rst
index 9ce10119ff..7b030a6285 100644
--- a/docs/admin_api/user_admin_api.rst
+++ b/docs/admin_api/user_admin_api.rst
@@ -1,9 +1,47 @@
+.. contents::
+
+Query User Account
+==================
+
+This API returns information about a specific user account.
+
+The api is::
+
+    GET /_synapse/admin/v2/users/<user_id>
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+It returns a JSON body like the following:
+
+.. code:: json
+
+    {
+        "displayname": "User",
+        "threepids": [
+            {
+                "medium": "email",
+                "address": "<user_mail_1>"
+            },
+            {
+                "medium": "email",
+                "address": "<user_mail_2>"
+            }
+        ],
+        "avatar_url": "<avatar_url>",
+        "admin": false,
+        "deactivated": false
+    }
+
+URL parameters:
+
+- ``user_id``: fully-qualified user id: for example, ``@user:server.com``.
+
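+As an illustration, a hypothetical curl invocation of this endpoint (the host
+and token are placeholders)::
+
+    curl --header "Authorization: Bearer <access_token>" \
+        "http://localhost:8008/_synapse/admin/v2/users/@user:server.com"
+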
 Create or modify Account
 ========================
 
 This API allows an administrator to create or modify a user account with a
-specific ``user_id``. Be aware that ``user_id`` is fully qualified: for example,
-``@user:server.com``.
+specific ``user_id``.
 
 This api is::
 
@@ -31,14 +69,30 @@ with a body of:
         "deactivated": false
     }
 
-including an ``access_token`` of a server admin.
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+URL parameters:
+
+- ``user_id``: fully-qualified user id: for example, ``@user:server.com``.
+
+Body parameters:
+
+- ``password``, optional. If provided, the user's password is updated and all
+  devices are logged out.
+
+- ``displayname``, optional, defaults to the value of ``user_id``.
+
+- ``threepids``, optional, allows setting the third-party IDs (email, msisdn)
+  belonging to a user.
+
+- ``avatar_url``, optional, must be a
+  `MXC URI <https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris>`_.
+
+- ``admin``, optional, defaults to ``false``.
+
+- ``deactivated``, optional, defaults to ``false``.
 
-The parameter ``displayname`` is optional and defaults to ``user_id``.
-The parameter ``threepids`` is optional.
-The parameter ``avatar_url`` is optional.
-The parameter ``admin`` is optional and defaults to 'false'.
-The parameter ``deactivated`` is optional and defaults to 'false'.
-The parameter ``password`` is optional. If provided the user's password is updated and all devices are logged out.
 If the user already exists then optional parameters default to the current value.
 
 List Accounts
@@ -50,17 +104,27 @@ The api is::
 
     GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
 
-including an ``access_token`` of a server admin.
-The parameters ``from`` and ``limit`` are required only for pagination.
-By default, a ``limit`` of 100 is used.
-The parameter ``user_id`` can be used to select only users with user ids that
-contain this value.
-The parameter ``guests=false`` can be used to exclude guest users,
-default is to include guest users.
-The parameter ``deactivated=true`` can be used to include deactivated users,
-default is to exclude deactivated users.
-If the endpoint does not return a ``next_token`` then there are no more users left.
-It returns a JSON body like the following:
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+The parameter ``from`` is optional but used for pagination, denoting the
+offset in the returned results. This should be treated as an opaque value and
+not explicitly set to anything other than the return value of ``next_token``
+from a previous call.
+
+The parameter ``limit`` is optional but is used for pagination, denoting the
+maximum number of items to return in this call. Defaults to ``100``.
+
+The parameter ``user_id`` is optional and filters to only users with user IDs
+that contain this value.
+
+The parameter ``guests`` is optional and if ``false`` will **exclude** guest users.
+Defaults to ``true`` to include guest users.
+
+The parameter ``deactivated`` is optional and if ``true`` will **include** deactivated users.
+Defaults to ``false`` to exclude deactivated users.
+
+A JSON body is returned with the following shape:
 
 .. code:: json
 
@@ -72,31 +136,41 @@ It returns a JSON body like the following:
                 "is_guest": 0,
                 "admin": 0,
                 "user_type": null,
-                "deactivated": 0
+                "deactivated": 0,
+                "displayname": "<User One>",
+                "avatar_url": null
             }, {
                 "name": "<user_id2>",
                 "password_hash": "<password_hash2>",
                 "is_guest": 0,
                 "admin": 1,
                 "user_type": null,
-                "deactivated": 0
+                "deactivated": 0,
+                "displayname": "<User Two>",
+                "avatar_url": "<avatar_url>"
             }
         ],
-        "next_token": "100"
+        "next_token": "100",
+        "total": 200
     }
 
+To paginate, check for ``next_token`` and if present, call the endpoint again
+with ``from`` set to the value of ``next_token``. This will return a new page.
 
-Query Account
-=============
+If the endpoint does not return a ``next_token`` then there are no more users
+to paginate through.
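+
+For example, if a call returned ``"next_token": "100"``, the next page might be
+requested as follows (a sketch; the host prefix is omitted)::
+
+    GET /_synapse/admin/v2/users?from=100&limit=10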
 
-This API returns information about a specific user account.
+Query current sessions for a user
+=================================
+
+This API returns information about the active sessions for a specific user.
 
 The api is::
 
-    GET /_synapse/admin/v1/whois/<user_id> (deprecated)
-    GET /_synapse/admin/v2/users/<user_id>
+    GET /_synapse/admin/v1/whois/<user_id>
 
-including an ``access_token`` of a server admin.
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
 
 It returns a JSON body like the following:
 
@@ -149,9 +223,10 @@ with a body of:
         "erase": true
     }
 
-including an ``access_token`` of a server admin.
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
 
-The erase parameter is optional and defaults to 'false'.
+The ``erase`` parameter is optional and defaults to ``false``.
 An empty body may be passed for backwards compatibility.
 
 
@@ -173,7 +248,8 @@ with a body of:
        "logout_devices": true,
    }
 
-including an ``access_token`` of a server admin.
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
 
 The parameter ``new_password`` is required.
 The parameter ``logout_devices`` is optional and defaults to ``true``.
@@ -186,7 +262,8 @@ The api is::
 
     GET /_synapse/admin/v1/users/<user_id>/admin
 
-including an ``access_token`` of a server admin.
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
 
 A response body like the following is returned:
 
@@ -214,4 +291,191 @@ with a body of:
         "admin": true
     }
 
-including an ``access_token`` of a server admin.
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+
+User devices
+============
+
+List all devices
+----------------
+Gets information about all devices for a specific ``user_id``.
+
+The API is::
+
+  GET /_synapse/admin/v2/users/<user_id>/devices
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+A response body like the following is returned:
+
+.. code:: json
+
+    {
+      "devices": [
+        {
+          "device_id": "QBUAZIFURK",
+          "display_name": "android",
+          "last_seen_ip": "1.2.3.4",
+          "last_seen_ts": 1474491775024,
+          "user_id": "<user_id>"
+        },
+        {
+          "device_id": "AUIECTSRND",
+          "display_name": "ios",
+          "last_seen_ip": "1.2.3.5",
+          "last_seen_ts": 1474491775025,
+          "user_id": "<user_id>"
+        }
+      ]
+    }
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- ``user_id`` - fully qualified: for example, ``@user:server.com``.
+
+**Response**
+
+The following fields are returned in the JSON response body:
+
+- ``devices`` - An array of objects, each containing information about a device.
+  Device objects contain the following fields:
+
+  - ``device_id`` - Identifier of device.
+  - ``display_name`` - Display name set by the user for this device.
+    Absent if no name has been set.
+  - ``last_seen_ip`` - The IP address where this device was last seen.
+    (May be a few minutes out of date, for efficiency reasons).
+  - ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
+    device was last seen. (May be a few minutes out of date, for efficiency reasons).
+  - ``user_id`` - Owner of device.
+
+Delete multiple devices
+-----------------------
+Deletes the given devices for a specific ``user_id``, and invalidates
+any access token associated with them.
+
+The API is::
+
+    POST /_synapse/admin/v2/users/<user_id>/delete_devices
+
+    {
+      "devices": [
+        "QBUAZIFURK",
+        "AUIECTSRND"
+      ],
+    }
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+An empty JSON dict is returned.
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- ``user_id`` - fully qualified: for example, ``@user:server.com``.
+
+The following fields are required in the JSON request body:
+
+- ``devices`` - The list of device IDs to delete.
+
+Show a device
+---------------
+Gets information on a single device, by ``device_id`` for a specific ``user_id``.
+
+The API is::
+
+    GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+A response body like the following is returned:
+
+.. code:: json
+
+    {
+      "device_id": "<device_id>",
+      "display_name": "android",
+      "last_seen_ip": "1.2.3.4",
+      "last_seen_ts": 1474491775024,
+      "user_id": "<user_id>"
+    }
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- ``user_id`` - fully qualified: for example, ``@user:server.com``.
+- ``device_id`` - The device to retrieve.
+
+**Response**
+
+The following fields are returned in the JSON response body:
+
+- ``device_id`` - Identifier of device.
+- ``display_name`` - Display name set by the user for this device.
+  Absent if no name has been set.
+- ``last_seen_ip`` - The IP address where this device was last seen.
+  (May be a few minutes out of date, for efficiency reasons).
+- ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
+  device was last seen. (May be a few minutes out of date, for efficiency reasons).
+- ``user_id`` - Owner of device.
+
+Update a device
+---------------
+Updates the metadata on the given ``device_id`` for a specific ``user_id``.
+
+The API is::
+
+    PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+    {
+      "display_name": "My other phone"
+    }
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+An empty JSON dict is returned.
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- ``user_id`` - fully qualified: for example, ``@user:server.com``.
+- ``device_id`` - The device to update.
+
+The following fields can be set in the JSON request body:
+
+- ``display_name`` - The new display name for this device. If not given,
+  the display name is unchanged.
+
+Delete a device
+---------------
+Deletes the given ``device_id`` for a specific ``user_id``,
+and invalidates any access token associated with it.
+
+The API is::
+
+    DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+    {}
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+An empty JSON dict is returned.
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- ``user_id`` - fully qualified: for example, ``@user:server.com``.
+- ``device_id`` - The device to delete.
diff --git a/docs/application_services.md b/docs/application_services.md
index 06cb79f1f9..e4592010a2 100644
--- a/docs/application_services.md
+++ b/docs/application_services.md
@@ -23,9 +23,13 @@ namespaces:
   users:  # List of users we're interested in
     - exclusive: <bool>
       regex: <regex>
+      group_id: <group>
     - ...
   aliases: []  # List of aliases we're interested in
   rooms: [] # List of room ids we're interested in
 ```
 
+* `exclusive`: If enabled, only this application service is allowed to register users in its namespace(s).
+* `group_id`: All users of this application service are dynamically joined to this group. This is useful for e.g. user organisation or flairs.
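+
+As an illustration, a hypothetical namespace entry for a bridge that claims an
+exclusive user prefix and joins those users to a group (the regex and group ID
+below are made up):
+
+```yaml
+namespaces:
+  users:
+    # only this application service may register @_example_bridge_* users
+    - exclusive: true
+      regex: "@_example_bridge_.*"
+      # users in this namespace are joined to this group
+      group_id: "+example_bridge:example.com"
+```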
+
 See the [spec](https://matrix.org/docs/spec/application_service/unstable.html) for further details on how application services work.
diff --git a/docs/dev/cas.md b/docs/dev/cas.md
new file mode 100644
index 0000000000..f8d02cc82c
--- /dev/null
+++ b/docs/dev/cas.md
@@ -0,0 +1,64 @@
+# How to test CAS as a developer without a server
+
+The [django-mama-cas](https://github.com/jbittel/django-mama-cas) project is an
+easy to run CAS implementation built on top of Django.
+
+## Prerequisites
+
+1. Create a new virtualenv: `python3 -m venv <your virtualenv>`
+2. Activate your virtualenv: `source /path/to/your/virtualenv/bin/activate`
+3. Install Django and django-mama-cas:
+   ```
+   python -m pip install "django<3" "django-mama-cas==2.4.0"
+   ```
+4. Create a Django project in the current directory:
+   ```
+   django-admin startproject cas_test .
+   ```
+5. Follow the [install directions](https://django-mama-cas.readthedocs.io/en/latest/installation.html#configuring) for django-mama-cas
+6. Setup the SQLite database: `python manage.py migrate`
+7. Create a user:
+   ```
+   python manage.py createsuperuser
+   ```
+   1. Use whatever you want as the username and password.
+   2. Leave the other fields blank.
+8. Use the built-in Django test server to serve the CAS endpoints on port 8000:
+   ```
+   python manage.py runserver
+   ```
+
+You should now have a Django project configured to serve CAS authentication with
+a single user created.
+
+## Configure Synapse (and Riot) to use CAS
+
+1. Modify your `homeserver.yaml` to enable CAS and point it to your locally
+   running Django test server:
+   ```yaml
+   cas_config:
+     enabled: true
+     server_url: "http://localhost:8000"
+     service_url: "http://localhost:8081"
+     #displayname_attribute: name
+     #required_attributes:
+     #    name: value
+   ```
+2. Restart Synapse.
+
+Note that the above configuration assumes the homeserver is running on port 8081
+and that the CAS server is on port 8000, both on localhost.
+
+## Testing the configuration
+
+Then in Riot:
+
+1. Visit the login page with a Riot pointing at your homeserver.
+2. Click the Single Sign-On button.
+3. Login using the credentials created with `createsuperuser`.
+4. You should be logged in.
+
+If you want to repeat this process you'll need to manually log out first:
+
+1. Visit http://localhost:8000/admin/
+2. Click "logout" in the top right.
diff --git a/docs/dev/git.md b/docs/dev/git.md
new file mode 100644
index 0000000000..b747ff20c9
--- /dev/null
+++ b/docs/dev/git.md
@@ -0,0 +1,148 @@
+Some notes on how we use git
+============================
+
+On keeping the commit history clean
+-----------------------------------
+
+In an ideal world, our git commit history would be a linear progression of
+commits each of which contains a single change building on what came
+before. Here, by way of an arbitrary example, is the top of `git log --graph
+b2dba0607`:
+
+<img src="git/clean.png" alt="clean git graph" width="500px">
+
+Note how the commit comment explains clearly what is changing and why. Also
+note the *absence* of merge commits, as well as the absence of commits called
+things like (to pick a few culprits):
+[“pep8”](https://github.com/matrix-org/synapse/commit/84691da6c), [“fix broken
+test”](https://github.com/matrix-org/synapse/commit/474810d9d),
+[“oops”](https://github.com/matrix-org/synapse/commit/c9d72e457),
+[“typo”](https://github.com/matrix-org/synapse/commit/836358823), or [“Who's
+the president?”](https://github.com/matrix-org/synapse/commit/707374d5d).
+
+There are a number of reasons why keeping a clean commit history is a good
+thing:
+
+ * From time to time, after a change lands, it turns out to be necessary to
+   revert it, or to backport it to a release branch. Those operations are
+   *much* easier when the change is contained in a single commit.
+
+ * Similarly, it's much easier to answer questions like “is the fix for
+   `/publicRooms` on the release branch?” if that change consists of a single
+   commit.
+
+ * Likewise: “what has changed on this branch in the last week?” is much
+   clearer without merges and “pep8” commits everywhere.
+
+ * Sometimes we need to figure out where a bug got introduced, or some
+   behaviour changed. One way of doing that is with `git bisect`: pick an
+   arbitrary commit between the known good point and the known bad point, and
+   see how the code behaves. However, that strategy fails if the commit you
+   chose is the middle of someone's epic branch in which they broke the world
+   before putting it back together again.
+
+One counterargument is that it is sometimes useful to see how a PR evolved as
+it went through review cycles. This is true, but that information is always
+available via the GitHub UI (or via the little-known [refs/pull
+namespace](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally)).
+
+
+Of course, in reality, things are more complicated than that. We have release
+branches as well as `develop` and `master`, and we deliberately merge changes
+between them. Bugs often slip through and have to be fixed later. That's all
+fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim
+towards.
+
+Merges, squashes, rebases: wtf?
+-------------------------------
+
+Ok, so that's what we'd like to achieve. How do we achieve it?
+
+The TL;DR is: when you come to merge a pull request, you *probably* want to
+“squash and merge”:
+
+![squash and merge](git/squash.png).
+
+(This applies whether you are merging your own PR, or that of another
+contributor.)
+
+“Squash and merge”<sup id="a1">[1](#f1)</sup> takes all of the changes in the
+PR, and bundles them into a single commit. GitHub gives you the opportunity to
+edit the commit message before you confirm, and normally you should do so,
+because the default will be useless (again: `* woops typo` is not a useful
+thing to keep in the historical record).
+
+The main problem with this approach comes when you have a series of pull
+requests which build on top of one another: as soon as you squash-merge the
+first PR, you'll end up with a stack of conflicts to resolve in all of the
+others. In general, it's best to avoid this situation in the first place by
+trying not to have multiple related PRs in flight at the same time. Still,
+sometimes that's not possible and doing a regular merge is the lesser evil.
+
+Another occasion in which a regular merge makes more sense is a PR where you've
+deliberately created a series of commits each of which makes sense in its own
+right. For example: [a PR which gradually propagates a refactoring operation
+through the codebase](https://github.com/matrix-org/synapse/pull/6837), or [a
+PR which is the culmination of several other
+PRs](https://github.com/matrix-org/synapse/pull/5987). In this case the ability
+to figure out when a particular change/bug was introduced could be very useful.
+
+Ultimately: **this is not a hard-and-fast-rule**. If in doubt, ask yourself “does
+each of the commits I am about to merge make sense in its own right?”, but
+remember that we're just doing our best to balance “keeping the commit history
+clean” with other factors.
+
+Git branching model
+-------------------
+
+A [lot](https://nvie.com/posts/a-successful-git-branching-model/)
+[of](http://scottchacon.com/2011/08/31/github-flow.html)
+[words](https://www.endoflineblog.com/gitflow-considered-harmful) have been
+written in the past about git branching models (no really, [a
+lot](https://martinfowler.com/articles/branching-patterns.html)). I tend to
+think the whole thing is overblown. Fundamentally, it's not that
+complicated. Here's how we do it.
+
+Let's start with a picture:
+
+![branching model](git/branches.jpg)
+
+It looks complicated, but it's really not. There's one basic rule: *anyone* is
+free to merge from *any* more-stable branch to *any* less-stable branch at
+*any* time<sup id="a2">[2](#f2)</sup>. (The principle behind this is that if a
+change is good enough for the more-stable branch, then it's also good enough to
+put in a less-stable branch.)
+
+Meanwhile, merging (or squashing, as per the above) from a less-stable to a
+more-stable branch is a deliberate action in which you want to publish a change
+or a set of changes to (some subset of) the world: for example, this happens
+when a PR is landed, or as part of our release process.
+
+So, what counts as a more- or less-stable branch? A little reflection will show
+that our active branches are ordered thus, from more-stable to less-stable:
+
+ * `master` (tracks our last release).
+ * `release-vX.Y.Z` (the branch where we prepare the next release)<sup
+   id="a3">[3](#f3)</sup>.
+ * PR branches which are targeting the release.
+ * `develop` (our "mainline" branch containing our bleeding-edge).
+ * regular PR branches.
+
+The corollary is: if you have a bugfix that needs to land in both
+`release-vX.Y.Z` *and* `develop`, then you should base your PR on
+`release-vX.Y.Z`, get it merged there, and then merge from `release-vX.Y.Z` to
+`develop`. (If a fix lands in `develop` and we later need it in a
+release-branch, we can of course cherry-pick it, but landing it in the release
+branch first helps reduce the chance of annoying conflicts.)
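+
+As a concrete sketch (the branch and PR names are illustrative), landing such a
+fix and then propagating it to `develop` might look like:
+
+```
+git checkout release-vX.Y.Z
+git merge --no-ff bugfix-branch    # or squash-merge the PR via GitHub
+git checkout develop
+git merge release-vX.Y.Z
+```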
+
+---
+
+<b id="f1">[1]</b>: “Squash and merge” is GitHub's term for this
+operation. Given that there is no merge involved, I'm not convinced it's the
+most intuitive name. [^](#a1)
+
+<b id="f2">[2]</b>: Well, anyone with commit access.[^](#a2)
+
+<b id="f3">[3]</b>: Very, very occasionally (I think this has happened once in
+the history of Synapse), we've had two releases in flight at once. Obviously,
+`release-v1.2.3` is more-stable than `release-v1.3.0`. [^](#a3)
diff --git a/docs/dev/git/branches.jpg b/docs/dev/git/branches.jpg
new file mode 100644
index 0000000000..715ecc8cd0
--- /dev/null
+++ b/docs/dev/git/branches.jpg
Binary files differ
diff --git a/docs/dev/git/clean.png b/docs/dev/git/clean.png
new file mode 100644
index 0000000000..3accd7ccef
--- /dev/null
+++ b/docs/dev/git/clean.png
Binary files differ
diff --git a/docs/dev/git/squash.png b/docs/dev/git/squash.png
new file mode 100644
index 0000000000..234caca3e4
--- /dev/null
+++ b/docs/dev/git/squash.png
Binary files differ
diff --git a/docs/dev/saml.md b/docs/dev/saml.md
index f41aadce47..a9bfd2dc05 100644
--- a/docs/dev/saml.md
+++ b/docs/dev/saml.md
@@ -18,9 +18,13 @@ To make Synapse (and therefore Riot) use it:
        metadata:
          local: ["samling.xml"]   
    ```
-5. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
+5. Ensure that your `homeserver.yaml` has a setting for `public_baseurl`:
+   ```yaml
+   public_baseurl: http://localhost:8080/
+   ```
+6. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
    the dependencies are installed and ready to go.
-6. Restart Synapse.
+7. Restart Synapse.
 
 Then in Riot:
 
diff --git a/docs/log_contexts.md b/docs/log_contexts.md
index 5331e8c88b..fe30ca2791 100644
--- a/docs/log_contexts.md
+++ b/docs/log_contexts.md
@@ -29,14 +29,13 @@ from synapse.logging import context         # omitted from future snippets
 def handle_request(request_id):
     request_context = context.LoggingContext()
 
-    calling_context = context.LoggingContext.current_context()
-    context.LoggingContext.set_current_context(request_context)
+    calling_context = context.set_current_context(request_context)
     try:
         request_context.request = request_id
         do_request_handling()
         logger.debug("finished")
     finally:
-        context.LoggingContext.set_current_context(calling_context)
+        context.set_current_context(calling_context)
 
 def do_request_handling():
     logger.debug("phew")  # this will be logged against request_id
diff --git a/docs/metrics-howto.md b/docs/metrics-howto.md
index 32abb9f44e..cf69938a2a 100644
--- a/docs/metrics-howto.md
+++ b/docs/metrics-howto.md
@@ -60,6 +60,31 @@
 
 1.  Restart Prometheus.
 
+## Monitoring workers
+
+To monitor a Synapse installation using
+[workers](https://github.com/matrix-org/synapse/blob/master/docs/workers.md),
+every worker needs to be monitored independently, in addition to
+the main homeserver process. This is because workers don't send
+their metrics to the main homeserver process, but expose them
+directly (if they are configured to do so).
+
+To allow collecting metrics from a worker, you need to add a
+`metrics` listener to its configuration, by adding the following
+under `worker_listeners`:
+
+```yaml
+ - type: metrics
+   bind_address: ''
+   port: 9101
+```
+
+The `bind_address` and `port` parameters should be set so that
+the resulting listener can be reached by Prometheus, and so that
+the port doesn't clash with one used by an existing worker.
+With this example, the worker's metrics would then be available
+on `http://127.0.0.1:9101`.
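+
+A matching Prometheus scrape job might then look like the following (a sketch;
+the job name is an example, and the metrics path is assumed to be the standard
+`/_synapse/metrics`):
+
+```yaml
+scrape_configs:
+  - job_name: "synapse-worker"
+    metrics_path: "/_synapse/metrics"
+    static_configs:
+      # the worker's metrics listener configured above
+      - targets: ["127.0.0.1:9101"]
+```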
+
 ## Renaming of metrics & deprecation of old names in 1.2
 
 Synapse 1.2 updates the Prometheus metrics to match the naming
diff --git a/docs/openid.md b/docs/openid.md
new file mode 100644
index 0000000000..688379ddd9
--- /dev/null
+++ b/docs/openid.md
@@ -0,0 +1,206 @@
+# Configuring Synapse to authenticate against an OpenID Connect provider
+
+Synapse can be configured to use an OpenID Connect Provider (OP) for
+authentication, instead of its own local password database.
+
+Any OP should work with Synapse, as long as it supports the authorization code
+flow. There are a few options for that:
+
+ - start a local OP. Synapse has been tested with [Hydra][hydra] and
+   [Dex][dex-idp].  Note that for an OP to work, it should be served under a
+   secure (HTTPS) origin.  A certificate signed with a self-signed, locally
+   trusted CA should work. In that case, start Synapse with a `SSL_CERT_FILE`
+   environment variable set to the path of the CA.
+
+ - set up a SaaS OP, like [Google][google-idp], [Auth0][auth0] or
+   [Okta][okta]. Synapse has been tested with Auth0 and Google.
+
+It may also be possible to use other OAuth2 providers which provide the
+[authorization code grant type](https://tools.ietf.org/html/rfc6749#section-4.1),
+such as [Github][github-idp].
+
+[google-idp]: https://developers.google.com/identity/protocols/oauth2/openid-connect
+[auth0]: https://auth0.com/
+[okta]: https://www.okta.com/
+[dex-idp]: https://github.com/dexidp/dex
+[hydra]: https://www.ory.sh/docs/hydra/
+[github-idp]: https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps
+
+## Preparing Synapse
+
+The OpenID integration in Synapse uses the
+[`authlib`](https://pypi.org/project/Authlib/) library, which must be installed
+as follows:
+
+ * The relevant libraries are included in the Docker images and Debian packages
+   provided by `matrix.org` so no further action is needed.
+
+ * If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
+   install synapse[oidc]` to install the necessary dependencies.
+
+ * For other installation mechanisms, see the documentation provided by the
+   maintainer.
+
+To enable the OpenID integration, you should then add an `oidc_config` section
+to your configuration file (or uncomment the `enabled: true` line in the
+existing section). See [sample_config.yaml](./sample_config.yaml) for some
+sample settings, as well as the text below for example configurations for
+specific providers.
+
+## Sample configs
+
+Here are a few configs for providers that should work with Synapse.
+
+### [Dex][dex-idp]
+
+[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
+Although it is designed to help build a full-blown provider with an
+external database, it can be configured with static passwords in a config file.
+
+Follow the [Getting Started
+guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md)
+to install Dex.
+
+Edit the `examples/config-dev.yaml` config file from the Dex repo to add a client:
+
+```yaml
+staticClients:
+- id: synapse
+  secret: secret
+  redirectURIs:
+  - '[synapse public baseurl]/_synapse/oidc/callback'
+  name: 'Synapse'
+```
+
+Run with `dex serve examples/config-dev.yaml`.
+
+Synapse config:
+
+```yaml
+oidc_config:
+   enabled: true
+   skip_verification: true # This is needed as Dex is served on an insecure endpoint
+   issuer: "http://127.0.0.1:5556/dex"
+   client_id: "synapse"
+   client_secret: "secret"
+   scopes: ["openid", "profile"]
+   user_mapping_provider:
+     config:
+       localpart_template: "{{ user.name }}"
+       display_name_template: "{{ user.name|capitalize }}"
+```
+
+### [Auth0][auth0]
+
+1. Create a regular web application for Synapse
+2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/oidc/callback`
+3. Add a rule to add the `preferred_username` claim.
+   <details>
+    <summary>Code sample</summary>
+
+    ```js
+    function addPersistenceAttribute(user, context, callback) {
+      user.user_metadata = user.user_metadata || {};
+      user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
+      context.idToken.preferred_username = user.user_metadata.preferred_username;
+
+      auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
+        .then(function(){
+            callback(null, user, context);
+        })
+        .catch(function(err){
+            callback(err);
+        });
+    }
+    ```
+  </details>
+
+Synapse config:
+
+```yaml
+oidc_config:
+   enabled: true
+   issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
+   client_id: "your-client-id" # TO BE FILLED
+   client_secret: "your-client-secret" # TO BE FILLED
+   scopes: ["openid", "profile"]
+   user_mapping_provider:
+     config:
+       localpart_template: "{{ user.preferred_username }}"
+       display_name_template: "{{ user.name }}"
+```
+
+### GitHub
+
+GitHub is a bit special as it is not an OpenID Connect compliant provider, but
+just a regular OAuth2 provider.
+
+The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
+can be used to retrieve information on the authenticated user. As the Synapse
+login mechanism needs an attribute to uniquely identify users, and that endpoint
+does not return a `sub` property, an alternative `subject_claim` has to be set.
+
+1. Create a new OAuth application: https://github.com/settings/applications/new.
+2. Set the callback URL to `[synapse public baseurl]/_synapse/oidc/callback`.
+
+Synapse config:
+
+```yaml
+oidc_config:
+   enabled: true
+   discover: false
+   issuer: "https://github.com/"
+   client_id: "your-client-id" # TO BE FILLED
+   client_secret: "your-client-secret" # TO BE FILLED
+   authorization_endpoint: "https://github.com/login/oauth/authorize"
+   token_endpoint: "https://github.com/login/oauth/access_token"
+   userinfo_endpoint: "https://api.github.com/user"
+   scopes: ["read:user"]
+   user_mapping_provider:
+     config:
+       subject_claim: "id"
+       localpart_template: "{{ user.login }}"
+       display_name_template: "{{ user.name }}"
+```
+
+### [Google][google-idp]
+
+1. Set up a project in the Google API Console (see
+   https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).
+2. Add an "OAuth Client ID" for a Web Application under "Credentials".
+3. Copy the Client ID and Client Secret, and add the following to your Synapse config:
+   ```yaml
+   oidc_config:
+     enabled: true
+     issuer: "https://accounts.google.com/"
+     client_id: "your-client-id" # TO BE FILLED
+     client_secret: "your-client-secret" # TO BE FILLED
+     scopes: ["openid", "profile"]
+     user_mapping_provider:
+       config:
+         localpart_template: "{{ user.given_name|lower }}"
+         display_name_template: "{{ user.name }}"
+   ```
+4. Back in the Google console, add this Authorized redirect URI: `[synapse
+   public baseurl]/_synapse/oidc/callback`.
+
+### Twitch
+
+1. Set up a developer account on [Twitch](https://dev.twitch.tv/)
+2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
+3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/oidc/callback`
+
+Synapse config:
+
+```yaml
+oidc_config:
+   enabled: true
+   issuer: "https://id.twitch.tv/oauth2/"
+   client_id: "your-client-id" # TO BE FILLED
+   client_secret: "your-client-secret" # TO BE FILLED
+   client_auth_method: "client_secret_post"
+   user_mapping_provider:
+     config:
+       localpart_template: '{{ user.preferred_username }}'
+       display_name_template: '{{ user.name }}'
+```
diff --git a/docs/password_auth_providers.md b/docs/password_auth_providers.md
index 0db1a3804a..5d9ae67041 100644
--- a/docs/password_auth_providers.md
+++ b/docs/password_auth_providers.md
@@ -9,7 +9,11 @@ into Synapse, and provides a number of methods by which it can integrate
 with the authentication system.
 
 This document serves as a reference for those looking to implement their
-own password auth providers.
+own password auth providers. Additionally, here is a list of known
+password auth provider module implementations:
+
+* [matrix-synapse-ldap3](https://github.com/matrix-org/matrix-synapse-ldap3/)
+* [matrix-synapse-shared-secret-auth](https://github.com/devture/matrix-synapse-shared-secret-auth)
 
 ## Required methods
 
diff --git a/docs/postgres.md b/docs/postgres.md
index e0793ecee8..70fe29cdcc 100644
--- a/docs/postgres.md
+++ b/docs/postgres.md
@@ -61,7 +61,33 @@ Note that the PostgreSQL database *must* have the correct encoding set
 
 You may need to enable password authentication so `synapse_user` can
 connect to the database. See
-<https://www.postgresql.org/docs/11/auth-pg-hba-conf.html>.
+<https://www.postgresql.org/docs/current/auth-pg-hba-conf.html>.
+
+If you get an error along the lines of `FATAL:  Ident authentication failed for
+user "synapse_user"`, you may need to use an authentication method other than
+`ident`:
+
+* If the `synapse_user` user has a password, add the password to the `database:`
+  section of `homeserver.yaml`. Then add the following to `pg_hba.conf`:
+
+  ```
+  host    synapse     synapse_user    ::1/128     md5  # or `scram-sha-256` instead of `md5` if you use that
+  ```
+
+* If the `synapse_user` user does not have a password, then a password doesn't
+  have to be added to `homeserver.yaml`. But the following does need to be added
+  to `pg_hba.conf`:
+
+  ```
+  host    synapse     synapse_user    ::1/128     trust
+  ```
+
+Note that line order matters in `pg_hba.conf`, so make sure that if you do add a
+new line, it is inserted before:
+
+```
+host    all         all             ::1/128     ident
+```
 
 ### Fixing incorrect `COLLATE` or `CTYPE`
 
@@ -72,8 +98,7 @@ underneath the database, or if a different version of the locale is used on any
 replicas.
 
 The safest way to fix the issue is to take a dump and recreate the database with
-the correct `COLLATE` and `CTYPE` parameters (as per
-[docs/postgres.md](docs/postgres.md)). It is also possible to change the
+the correct `COLLATE` and `CTYPE` parameters (as shown above). It is also possible to change the
 parameters on a live database and run a `REINDEX` on the entire database,
 however extreme care must be taken to avoid database corruption.
 
@@ -105,19 +130,41 @@ of free memory the database host has available.
 When you are ready to start using PostgreSQL, edit the `database`
 section in your config file to match the following lines:
 
-    database:
-        name: psycopg2
-        args:
-            user: <user>
-            password: <pass>
-            database: <db>
-            host: <host>
-            cp_min: 5
-            cp_max: 10
+```yaml
+database:
+  name: psycopg2
+  args:
+    user: <user>
+    password: <pass>
+    database: <db>
+    host: <host>
+    cp_min: 5
+    cp_max: 10
+```
 
 All keys and values in `args` are passed to the `psycopg2.connect(..)`
 function, except keys beginning with `cp_`, which are consumed by the
-twisted adbapi connection pool.
+twisted adbapi connection pool. See the [libpq
+documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS)
+for a list of options which can be passed.
+
+You should consider tuning the `args.keepalives_*` options if there is any danger of
+the connection between your homeserver and database dropping; otherwise Synapse
+may block for an extended period while it waits for a response from the
+database server. Example values might be:
+
+```yaml
+# seconds of inactivity after which TCP should send a keepalive message to the server
+keepalives_idle: 10
+
+# the number of seconds after which a TCP keepalive message that is not
+# acknowledged by the server should be retransmitted
+keepalives_interval: 10
+
+# the number of TCP keepalives that can be lost before the client's connection
+# to the server is considered dead
+keepalives_count: 3
+```
 
 ## Porting from SQLite
 
diff --git a/docs/reverse_proxy.md b/docs/reverse_proxy.md
index af6d73927a..cbb8269568 100644
--- a/docs/reverse_proxy.md
+++ b/docs/reverse_proxy.md
@@ -9,7 +9,7 @@ of doing so is that it means that you can expose the default https port
 (443) to Matrix clients without needing to run Synapse with root
 privileges.
 
-> **NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
+**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
 the requested URI in any way (for example, by decoding `%xx` escapes).
 Beware that Apache *will* canonicalise URIs unless you specify
 `nocanon`.
@@ -18,7 +18,7 @@ When setting up a reverse proxy, remember that Matrix clients and other
 Matrix servers do not necessarily need to connect to your server via the
 same server name or port. Indeed, clients will use port 443 by default,
 whereas servers default to port 8448. Where these are different, we
-refer to the 'client port' and the \'federation port\'. See [the Matrix
+refer to the 'client port' and the 'federation port'. See [the Matrix
 specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
 for more details of the algorithm used for federation connections, and
 [delegate.md](<delegate.md>) for instructions on setting up delegation.
@@ -28,90 +28,113 @@ Let's assume that we expect clients to connect to our server at
 `https://example.com:8448`.  The following sections detail the configuration of
 the reverse proxy and the homeserver.
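+
+For reference, a minimal sketch of the homeserver listener that such a proxy
+forwards to (the port and addresses are assumptions matching the examples
+below; `x_forwarded` makes Synapse trust the `X-Forwarded-For` header set by
+the proxy):
+
+```yaml
+listeners:
+  # plain-HTTP listener behind the reverse proxy
+  - port: 8008
+    tls: false
+    type: http
+    x_forwarded: true
+    bind_addresses: ['127.0.0.1']
+    resources:
+      - names: [client, federation]
+```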
 
-## Webserver configuration examples
+## Reverse-proxy configuration examples
 
-> **NOTE**: You only need one of these.
+**NOTE**: You only need one of these.
 
 ### nginx
 
-        server {
-            listen 443 ssl;
-            listen [::]:443 ssl;
-            server_name matrix.example.com;
-
-            location /_matrix {
-                proxy_pass http://localhost:8008;
-                proxy_set_header X-Forwarded-For $remote_addr;
-            }
-        }
-
-        server {
-            listen 8448 ssl default_server;
-            listen [::]:8448 ssl default_server;
-            server_name example.com;
-
-            location / {
-                proxy_pass http://localhost:8008;
-                proxy_set_header X-Forwarded-For $remote_addr;
-            }
-        }
-
-> **NOTE**: Do not add a `/` after the port in `proxy_pass`, otherwise nginx will
+```
+server {
+    listen 443 ssl;
+    listen [::]:443 ssl;
+    server_name matrix.example.com;
+
+    location /_matrix {
+        proxy_pass http://localhost:8008;
+        proxy_set_header X-Forwarded-For $remote_addr;
+        # Nginx by default only allows file uploads up to 1M in size
+        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
+        client_max_body_size 10M;
+    }
+}
+
+server {
+    listen 8448 ssl default_server;
+    listen [::]:8448 ssl default_server;
+    server_name example.com;
+
+    location / {
+        proxy_pass http://localhost:8008;
+        proxy_set_header X-Forwarded-For $remote_addr;
+    }
+}
+```
+
+**NOTE**: Do not add a path after the port in `proxy_pass`, otherwise nginx will
 canonicalise/normalise the URI.
 
-### Caddy
+### Caddy 1
 
-        matrix.example.com {
-          proxy /_matrix http://localhost:8008 {
-            transparent
-          }
-        }
+```
+matrix.example.com {
+  proxy /_matrix http://localhost:8008 {
+    transparent
+  }
+}
 
-        example.com:8448 {
-          proxy / http://localhost:8008 {
-            transparent
-          }
-        }
+example.com:8448 {
+  proxy / http://localhost:8008 {
+    transparent
+  }
+}
+```
+
+### Caddy 2
+
+```
+matrix.example.com {
+  reverse_proxy /_matrix/* http://localhost:8008
+}
+
+example.com:8448 {
+  reverse_proxy http://localhost:8008
+}
+```
 
 ### Apache
 
-        <VirtualHost *:443>
-            SSLEngine on
-            ServerName matrix.example.com;
+```
+<VirtualHost *:443>
+    SSLEngine on
+    ServerName matrix.example.com;
 
-            AllowEncodedSlashes NoDecode
-            ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
-            ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
-        </VirtualHost>
+    AllowEncodedSlashes NoDecode
+    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+</VirtualHost>
 
-        <VirtualHost *:8448>
-            SSLEngine on
-            ServerName example.com;
+<VirtualHost *:8448>
+    SSLEngine on
+    ServerName example.com;
 
-            AllowEncodedSlashes NoDecode
-            ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
-            ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
-        </VirtualHost>
+    AllowEncodedSlashes NoDecode
+    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+</VirtualHost>
+```
 
-> **NOTE**: ensure the  `nocanon` options are included.
+**NOTE**: ensure the `nocanon` options are included.
 
 ### HAProxy
 
-        frontend https
-          bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
+```
+frontend https
+  bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
 
-          # Matrix client traffic
-          acl matrix-host hdr(host) -i matrix.example.com
-          acl matrix-path path_beg /_matrix
+  # Matrix client traffic
+  acl matrix-host hdr(host) -i matrix.example.com
+  acl matrix-path path_beg /_matrix
 
-          use_backend matrix if matrix-host matrix-path
+  use_backend matrix if matrix-host matrix-path
 
-        frontend matrix-federation
-          bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
-          default_backend matrix
+frontend matrix-federation
+  bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
+  default_backend matrix
 
-        backend matrix
-          server matrix 127.0.0.1:8008
+backend matrix
+  server matrix 127.0.0.1:8008
+```
 
 ## Homeserver Configuration
 
diff --git a/docs/saml_mapping_providers.md b/docs/saml_mapping_providers.md
deleted file mode 100644
index 92f2380488..0000000000
--- a/docs/saml_mapping_providers.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# SAML Mapping Providers
-
-A SAML mapping provider is a Python class (loaded via a Python module) that
-works out how to map attributes of a SAML response object to Matrix-specific
-user attributes. Details such as user ID localpart, displayname, and even avatar
-URLs are all things that can be mapped from talking to a SSO service.
-
-As an example, a SSO service may return the email address
-"john.smith@example.com" for a user, whereas Synapse will need to figure out how
-to turn that into a displayname when creating a Matrix user for this individual.
-It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
-variations. As each Synapse configuration may want something different, this is
-where SAML mapping providers come into play.
-
-## Enabling Providers
-
-External mapping providers are provided to Synapse in the form of an external
-Python module. Retrieve this module from [PyPi](https://pypi.org) or elsewhere,
-then tell Synapse where to look for the handler class by editing the
-`saml2_config.user_mapping_provider.module` config option.
-
-`saml2_config.user_mapping_provider.config` allows you to provide custom
-configuration options to the module. Check with the module's documentation for
-what options it provides (if any). The options listed by default are for the
-user mapping provider built in to Synapse. If using a custom module, you should
-comment these options out and use those specified by the module instead.
-
-## Building a Custom Mapping Provider
-
-A custom mapping provider must specify the following methods:
-
-* `__init__(self, parsed_config)`
-   - Arguments:
-     - `parsed_config` - A configuration object that is the return value of the
-       `parse_config` method. You should set any configuration options needed by
-       the module here.
-* `saml_response_to_user_attributes(self, saml_response, failures)`
-    - Arguments:
-      - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
-                          information from.
-      - `failures` - An `int` that represents the amount of times the returned
-                     mxid localpart mapping has failed.  This should be used
-                     to create a deduplicated mxid localpart which should be
-                     returned instead. For example, if this method returns
-                     `john.doe` as the value of `mxid_localpart` in the returned
-                     dict, and that is already taken on the homeserver, this
-                     method will be called again with the same parameters but
-                     with failures=1. The method should then return a different
-                     `mxid_localpart` value, such as `john.doe1`.
-    - This method must return a dictionary, which will then be used by Synapse
-      to build a new user. The following keys are allowed:
-       * `mxid_localpart` - Required. The mxid localpart of the new user.
-       * `displayname` - The displayname of the new user. If not provided, will default to
-                         the value of `mxid_localpart`.
-* `parse_config(config)`
-    - This method should have the `@staticmethod` decoration.
-    - Arguments:
-        - `config` - A `dict` representing the parsed content of the
-          `saml2_config.user_mapping_provider.config` homeserver config option.
-           Runs on homeserver startup. Providers should extract any option values
-           they need here.
-    - Whatever is returned will be passed back to the user mapping provider module's
-      `__init__` method during construction.
-* `get_saml_attributes(config)`
-    - This method should have the `@staticmethod` decoration.
-    - Arguments:
-        - `config` - A object resulting from a call to `parse_config`.
-    - Returns a tuple of two sets. The first set equates to the saml auth
-      response attributes that are required for the module to function, whereas
-      the second set consists of those attributes which can be used if available,
-      but are not necessary.
-
-## Synapse's Default Provider
-
-Synapse has a built-in SAML mapping provider if a custom provider isn't
-specified in the config. It is located at
-[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml
index bf1ec4ece9..a60b15fa4e 100644
--- a/docs/sample_config.yaml
+++ b/docs/sample_config.yaml
@@ -33,10 +33,15 @@ server_name: "SERVERNAME"
 #
 pid_file: DATADIR/homeserver.pid
 
-# The path to the web client which will be served at /_matrix/client/
-# if 'webclient' is configured under the 'listeners' configuration.
+# The absolute URL to the web client which /_matrix/client will redirect
+# to if 'webclient' is configured under the 'listeners' configuration.
 #
-#web_client_location: "/path/to/web/root"
+# This option can be also set to the filesystem path to the web client
+# which will be served at /_matrix/client/ if 'webclient' is configured
+# under the 'listeners' configuration, however this is a security risk:
+# https://github.com/matrix-org/synapse#security-note
+#
+#web_client_location: https://riot.example.com/
 
 # The public-facing base URL that clients use to access this HS
 # (not including _matrix/...). This is the same URL a user would
@@ -248,6 +253,18 @@ listeners:
   #  bind_addresses: ['::1', '127.0.0.1']
   #  type: manhole
 
+# Forward extremities can build up in a room due to networking delays between
+# homeservers. Once this happens in a large room, calculation of the state of
+# that room can become quite expensive. To mitigate this, once the number of
+# forward extremities reaches a given threshold, Synapse will send an
+# org.matrix.dummy_event event, which will reduce the forward extremities
+# in the room.
+#
+# This setting defines the threshold (i.e. number of forward extremities in the
+# room) at which dummy events are sent. The default value is 10.
+#
+#dummy_events_threshold: 5
+
 
 ## Homeserver blocking ##
 
@@ -305,22 +322,27 @@ listeners:
 # Used by phonehome stats to group together related servers.
 #server_context: context
 
-# Resource-constrained homeserver Settings
+# Resource-constrained homeserver settings
 #
-# If limit_remote_rooms.enabled is True, the room complexity will be
-# checked before a user joins a new remote room. If it is above
-# limit_remote_rooms.complexity, it will disallow joining or
-# instantly leave.
+# When this is enabled, the room "complexity" will be checked before a user
+# joins a new remote room. If it is above the complexity limit, the server will
+# disallow joining, or will instantly leave.
 #
-# limit_remote_rooms.complexity_error can be set to customise the text
-# displayed to the user when a room above the complexity threshold has
-# its join cancelled.
+# Room complexity is an arbitrary measure based on factors such as the number of
+# users in the room.
 #
-# Uncomment the below lines to enable:
-#limit_remote_rooms:
-#  enabled: true
-#  complexity: 1.0
-#  complexity_error: "This room is too complex."
+limit_remote_rooms:
+  # Uncomment to enable room complexity checking.
+  #
+  #enabled: true
+
+  # the limit above which rooms cannot be joined. The default is 1.0.
+  #
+  #complexity: 0.5
+
+  # override the error which is returned when the room is too complex.
+  #
+  #complexity_error: "This room is too complex."
 
 # Whether to require a user to be in the room to add an alias to it.
 # Defaults to 'true'.
@@ -409,6 +431,16 @@ retention:
   #    longest_max_lifetime: 1y
   #    interval: 1d
 
+# Inhibits the /requestToken endpoints from returning an error that might leak
+# information about whether an e-mail address is in use or not on this
+# homeserver.
+# Note that for some endpoints the error situation is the e-mail already being
+# used, and for others the error is the entered e-mail not being in use.
+# If this option is enabled, instead of returning an error, these endpoints will
+# act as if no error happened and return a fake session ID ('sid') to clients.
+#
+#request_token_inhibit_3pid_errors: true
+
 
 ## TLS ##
 
@@ -576,20 +608,94 @@ acme:
 
 
 
-## Database ##
+## Caching ##
 
-database:
-  # The database engine name
-  name: "sqlite3"
-  # Arguments to pass to the engine
-  args:
-    # Path to the database
-    database: "DATADIR/homeserver.db"
+# Caching can be configured through the following options.
+#
+# A cache 'factor' is a multiplier that can be applied to each of
+# Synapse's caches in order to increase or decrease the maximum
+# number of entries that can be stored.
 
-# Number of events to cache in memory.
+# The number of events to cache in memory. Not affected by
+# caches.global_factor.
 #
 #event_cache_size: 10K
 
+caches:
+   # Controls the global cache factor, which is the default cache factor
+   # for all caches if a specific factor for that cache is not otherwise
+   # set.
+   #
+   # This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
+   # variable. Setting by environment variable takes priority over
+   # setting through the config file.
+   #
+   # Defaults to 0.5, which will halve the size of all caches.
+   #
+   #global_factor: 1.0
+
+   # A dictionary of cache name to cache factor for that individual
+   # cache. Overrides the global cache factor for a given cache.
+   #
+   # These can also be set through environment variables comprised
+   # of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
+   # letters and underscores. Setting by environment variable
+   # takes priority over setting through the config file.
+   # Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
+   #
+   # Some caches have '*' and other characters that are not
+   # alphanumeric or underscores. These caches can be named with or
+   # without the special characters stripped. For example, to specify
+   # the cache factor for `*stateGroupCache*` via an environment
+   # variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
+   #
+   per_cache_factors:
+     #get_users_who_share_room_with_user: 2.0
+
+
+## Database ##
+
+# The 'database' setting defines the database that synapse uses to store all of
+# its data.
+#
+# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
+# 'psycopg2' (for PostgreSQL).
+#
+# 'args' gives options which are passed through to the database engine,
+# except for options starting 'cp_', which are used to configure the Twisted
+# connection pool. For a reference to valid arguments, see:
+#   * for sqlite: https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
+#   * for postgres: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
+#   * for the connection pool: https://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__
+#
+#
+# Example SQLite configuration:
+#
+#database:
+#  name: sqlite3
+#  args:
+#    database: /path/to/homeserver.db
+#
+#
+# Example Postgres configuration:
+#
+#database:
+#  name: psycopg2
+#  args:
+#    user: synapse
+#    password: secretpassword
+#    database: synapse
+#    host: localhost
+#    cp_min: 5
+#    cp_max: 10
+#
+# For more information on using Synapse with Postgres, see `docs/postgres.md`.
+#
+database:
+  name: sqlite3
+  args:
+    database: DATADIR/homeserver.db
+
 
 ## Logging ##
 
@@ -697,12 +803,11 @@ media_store_path: "DATADIR/media_store"
 #
 #media_storage_providers:
 #  - module: file_system
-#    # Whether to write new local files.
+#    # Whether to store newly uploaded local files
 #    store_local: false
-#    # Whether to write new remote media
+#    # Whether to store newly downloaded remote files
 #    store_remote: false
-#    # Whether to block upload requests waiting for write to this
-#    # provider to complete
+#    # Whether to wait for successful storage for local uploads
 #    store_synchronous: false
 #    config:
 #       directory: /mnt/some/other/directory
@@ -821,31 +926,55 @@ media_store_path: "DATADIR/media_store"
 #
 #max_spider_size: 10M
 
+# A list of values for the Accept-Language HTTP header used when
+# downloading webpages during URL preview generation. This allows
+# Synapse to specify the preferred languages that URL previews should
+# be in when communicating with remote servers.
+#
+# Each value is an IETF language tag; a 2-3 letter identifier for a
+# language, optionally followed by subtags separated by '-', specifying
+# a country or region variant.
+#
+# Multiple values can be provided, and a weight can be added to each by
+# using quality value syntax (;q=). '*' translates to any language.
+#
+# Defaults to "en".
+#
+# Example:
+#
+# url_preview_accept_language:
+#   - en-UK
+#   - en-US;q=0.9
+#   - fr;q=0.8
+#   - *;q=0.7
+#
+url_preview_accept_language:
+#   - en
+
 
 ## Captcha ##
-# See docs/CAPTCHA_SETUP for full details of configuring this.
+# See docs/CAPTCHA_SETUP.md for full details of configuring this.
 
-# This homeserver's ReCAPTCHA public key.
+# This homeserver's ReCAPTCHA public key. Must be specified if
+# enable_registration_captcha is enabled.
 #
 #recaptcha_public_key: "YOUR_PUBLIC_KEY"
 
-# This homeserver's ReCAPTCHA private key.
+# This homeserver's ReCAPTCHA private key. Must be specified if
+# enable_registration_captcha is enabled.
 #
 #recaptcha_private_key: "YOUR_PRIVATE_KEY"
 
-# Enables ReCaptcha checks when registering, preventing signup
+# Uncomment to enable ReCaptcha checks when registering, preventing signup
 # unless a captcha is answered. Requires a valid ReCaptcha
-# public/private key.
+# public/private key. Defaults to 'false'.
 #
-#enable_registration_captcha: false
-
-# A secret key used to bypass the captcha test entirely.
-#
-#captcha_bypass_secret: "YOUR_SECRET_HERE"
+#enable_registration_captcha: true
 
 # The API endpoint to use for verifying m.login.recaptcha responses.
+# Defaults to "https://www.recaptcha.net/recaptcha/api/siteverify".
 #
-#recaptcha_siteverify_api: "https://www.recaptcha.net/recaptcha/api/siteverify"
+#recaptcha_siteverify_api: "https://my.recaptcha.site"
 
 
 ## TURN ##
@@ -994,7 +1123,7 @@ account_validity:
 # If set, allows registration of standard or admin accounts by anyone who
 # has the shared secret, even if registration is otherwise disabled.
 #
-# registration_shared_secret: <PRIVATE STRING>
+#registration_shared_secret: <PRIVATE STRING>
 
 # Set the number of bcrypt rounds used to generate password hash.
 # Larger numbers increase the work factor needed to generate the hash.
@@ -1062,6 +1191,29 @@ account_threepid_delegates:
     #email: https://example.com     # Delegate email sending to example.com
     #msisdn: http://localhost:8090  # Delegate SMS sending to this local process
 
+# Whether users are allowed to change their displayname after it has
+# been initially set. Useful when provisioning users based on the
+# contents of a third-party directory.
+#
+# Does not apply to server administrators. Defaults to 'true'
+#
+#enable_set_displayname: false
+
+# Whether users are allowed to change their avatar after it has been
+# initially set. Useful when provisioning users based on the contents
+# of a third-party directory.
+#
+# Does not apply to server administrators. Defaults to 'true'
+#
+#enable_set_avatar_url: false
+
+# Whether users can change the 3PIDs associated with their accounts
+# (email address and msisdn).
+#
+# Defaults to 'true'
+#
+#enable_3pid_changes: false
+
 # Users who register on this homeserver will automatically be joined
 # to these rooms
 #
@@ -1076,6 +1228,13 @@ account_threepid_delegates:
 #
 #autocreate_auto_join_rooms: true
 
+# When auto_join_rooms is specified, setting this flag to false prevents
+# guest accounts from being automatically joined to the rooms.
+#
+# Defaults to true.
+#
+#auto_join_rooms_for_guests: false
+
 
 ## Metrics ###
 
@@ -1097,14 +1256,15 @@ account_threepid_delegates:
 # enabled by default, either for performance reasons or limited use.
 #
 metrics_flags:
-    # Publish synapse_federation_known_servers, a g auge of the number of
+    # Publish synapse_federation_known_servers, a gauge of the number of
     # servers this homeserver knows about, including itself. May cause
     # performance problems on large homeservers.
     #
     #known_servers: true
 
 # Whether or not to report anonymized homeserver usage statistics.
-# report_stats: true|false
+#
+#report_stats: true|false
 
 # The endpoint to report the anonymized homeserver usage statistics to.
 # Defaults to https://matrix.org/report-usage-stats/push
@@ -1140,13 +1300,13 @@ metrics_flags:
 # the registration_shared_secret is used, if one is given; otherwise,
 # a secret key is derived from the signing key.
 #
-# macaroon_secret_key: <PRIVATE STRING>
+#macaroon_secret_key: <PRIVATE STRING>
 
 # a secret which is used to calculate HMACs for form values, to stop
 # falsification of values. Must be specified for the User Consent
 # forms to work.
 #
-# form_secret: <PRIVATE STRING>
+#form_secret: <PRIVATE STRING>
 
 ## Signing Keys ##
 
@@ -1231,6 +1391,8 @@ trusted_key_servers:
 #key_server_signing_keys_path: "key_server_signing_keys.key"
 
 
+## Single sign-on integration ##
+
 # Enable SAML2 for registration and login. Uses pysaml2.
 #
 # At least one of `sp_config` or `config_path` must be set in this section to
@@ -1263,32 +1425,32 @@ saml2_config:
   #    remote:
   #      - url: https://our_idp/metadata.xml
   #
-  #    # By default, the user has to go to our login page first. If you'd like
-  #    # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
-  #    # 'service.sp' section:
-  #    #
-  #    #service:
-  #    #  sp:
-  #    #    allow_unsolicited: true
-  #
-  #    # The examples below are just used to generate our metadata xml, and you
-  #    # may well not need them, depending on your setup. Alternatively you
-  #    # may need a whole lot more detail - see the pysaml2 docs!
-  #
-  #    description: ["My awesome SP", "en"]
-  #    name: ["Test SP", "en"]
-  #
-  #    organization:
-  #      name: Example com
-  #      display_name:
-  #        - ["Example co", "en"]
-  #      url: "http://example.com"
-  #
-  #    contact_person:
-  #      - given_name: Bob
-  #        sur_name: "the Sysadmin"
-  #        email_address": ["admin@example.com"]
-  #        contact_type": technical
+  #  # By default, the user has to go to our login page first. If you'd like
+  #  # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
+  #  # 'service.sp' section:
+  #  #
+  #  #service:
+  #  #  sp:
+  #  #    allow_unsolicited: true
+  #
+  #  # The examples below are just used to generate our metadata xml, and you
+  #  # may well not need them, depending on your setup. Alternatively you
+  #  # may need a whole lot more detail - see the pysaml2 docs!
+  #
+  #  description: ["My awesome SP", "en"]
+  #  name: ["Test SP", "en"]
+  #
+  #  organization:
+  #    name: Example com
+  #    display_name:
+  #      - ["Example co", "en"]
+  #    url: "http://example.com"
+  #
+  #  contact_person:
+  #    - given_name: Bob
+  #      sur_name: "the Sysadmin"
+  #      email_address": ["admin@example.com"]
+  #      contact_type": technical
 
   # Instead of putting the config inline as above, you can specify a
   # separate pysaml2 configuration file:
@@ -1364,7 +1526,13 @@ saml2_config:
   # * HTML page to display to users if something goes wrong during the
   #   authentication process: 'saml_error.html'.
   #
-  #   This template doesn't currently need any variable to render.
+  #   When rendering, this template is given the following variables:
+  #     * code: an HTML error code corresponding to the error that is being
+  #       returned (typically 400 or 500)
+  #
+  #     * msg: a textual message describing the error.
+  #
+  #   The variables will automatically be HTML-escaped.
   #
   # You can see the default templates at:
   # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
@@ -1372,6 +1540,121 @@ saml2_config:
   #template_dir: "res/templates"
 
 
+# OpenID Connect integration. The following settings can be used to make Synapse
+# use an OpenID Connect Provider for authentication, instead of its internal
+# password database.
+#
+# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md.
+#
+oidc_config:
+  # Uncomment the following to enable authorization against an OpenID Connect
+  # server. Defaults to false.
+  #
+  #enabled: true
+
+  # Uncomment the following to disable use of the OIDC discovery mechanism to
+  # discover endpoints. Defaults to true.
+  #
+  #discover: false
+
+  # the OIDC issuer. Used to validate tokens and (if discovery is enabled) to
+  # discover the provider's endpoints.
+  #
+  # Required if 'enabled' is true.
+  #
+  #issuer: "https://accounts.example.com/"
+
+  # oauth2 client id to use.
+  #
+  # Required if 'enabled' is true.
+  #
+  #client_id: "provided-by-your-issuer"
+
+  # oauth2 client secret to use.
+  #
+  # Required if 'enabled' is true.
+  #
+  #client_secret: "provided-by-your-issuer"
+
+  # auth method to use when exchanging the token.
+  # Valid values are 'client_secret_basic' (default), 'client_secret_post' and
+  # 'none'.
+  #
+  #client_auth_method: client_secret_post
+
+  # list of scopes to request. This should normally include the "openid" scope.
+  # Defaults to ["openid"].
+  #
+  #scopes: ["openid", "profile"]
+
+  # the oauth2 authorization endpoint. Required if provider discovery is disabled.
+  #
+  #authorization_endpoint: "https://accounts.example.com/oauth2/auth"
+
+  # the oauth2 token endpoint. Required if provider discovery is disabled.
+  #
+  #token_endpoint: "https://accounts.example.com/oauth2/token"
+
+  # the OIDC userinfo endpoint. Required if discovery is disabled and the
+  # "openid" scope is not requested.
+  #
+  #userinfo_endpoint: "https://accounts.example.com/userinfo"
+
+  # URI where to fetch the JWKS. Required if discovery is disabled and the
+  # "openid" scope is used.
+  #
+  #jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
+
+  # Uncomment to skip metadata verification. Defaults to false.
+  #
+  # Use this if you are connecting to a provider that is not OpenID Connect
+  # compliant.
+  # Avoid this in production.
+  #
+  #skip_verification: true
+
+  # An external module can be provided here as a custom solution to mapping
+  # attributes returned from an OIDC provider onto a Matrix user.
+  #
+  user_mapping_provider:
+    # The custom module's class. Uncomment to use a custom module.
+    # Default is 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'.
+    #
+    # See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
+    # for information on implementing a custom mapping provider.
+    #
+    #module: mapping_provider.OidcMappingProvider
+
+    # Custom configuration values for the module. This section will be passed as
+    # a Python dictionary to the user mapping provider module's `parse_config`
+    # method.
+    #
+    # The examples below are intended for the default provider: they should be
+    # changed if using a custom provider.
+    #
+    config:
+      # name of the claim containing a unique identifier for the user.
+      # Defaults to `sub`, which OpenID Connect compliant providers should provide.
+      #
+      #subject_claim: "sub"
+
+      # Jinja2 template for the localpart of the MXID.
+      #
+      # When rendering, this template is given the following variables:
+      #   * user: The claims returned by the UserInfo Endpoint and/or in the ID
+      #     Token
+      #
+      # This must be configured if using the default mapping provider.
+      #
+      localpart_template: "{{ user.preferred_username }}"
+
+      # Jinja2 template for the display name to set on first login.
+      #
+      # If unset, no displayname will be set.
+      #
+      #display_name_template: "{{ user.given_name }} {{ user.last_name }}"
+
+
 
 # Enable CAS for registration and login.
 #
@@ -1384,7 +1667,8 @@ saml2_config:
 #   #    name: value
 
 
-# Additional settings to use with single-sign on systems such as SAML2 and CAS.
+# Additional settings to use with single-sign on systems such as OpenID Connect,
+# SAML2 and CAS.
 #
 sso:
     # A list of client URLs which are whitelisted so that the user does not
@@ -1397,6 +1681,10 @@ sso:
     # phishing attacks from evil.site. To avoid this, include a slash after the
     # hostname: "https://my.client/".
     #
+    # If public_baseurl is set, then the login fallback page (used by clients
+    # that don't natively support the required login flows) is whitelisted in
+    # addition to any URLs in this list.
+    #
     # By default, this list is empty.
     #
     #client_whitelist:
@@ -1428,6 +1716,37 @@ sso:
     #
     #     * server_name: the homeserver's name.
     #
+    # * HTML page which notifies the user that they are authenticating to confirm
+    #   an operation on their account during the user interactive authentication
+    #   process: 'sso_auth_confirm.html'.
+    #
+    #   When rendering, this template is given the following variables:
+    #     * redirect_url: the URL the user is about to be redirected to. Needs
+    #                     manual escaping (see
+    #                     https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
+    #
+    #     * description: the operation which the user is being asked to confirm
+    #
+    # * HTML page shown after a successful user interactive authentication session:
+    #   'sso_auth_success.html'.
+    #
+    #   Note that this page must include the JavaScript which notifies of a successful authentication
+    #   (see https://matrix.org/docs/spec/client_server/r0.6.0#fallback).
+    #
+    #   This template has no additional variables.
+    #
+    # * HTML page shown during single sign-on if a deactivated user (according to Synapse's database)
+    #   attempts to login: 'sso_account_deactivated.html'.
+    #
+    #   This template has no additional variables.
+    #
+    # * HTML page to display to users if something goes wrong during the
+    #   OpenID Connect authentication process: 'sso_error.html'.
+    #
+    #   When rendering, this template is given two variables:
+    #     * error: the technical name of the error
+    #     * error_description: a human-readable message for the error
+    #
     # You can see the default templates at:
     # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
     #
@@ -1458,6 +1777,41 @@ password_config:
    #
    #pepper: "EVEN_MORE_SECRET"
 
+   # Define and enforce a password policy. Each parameter is optional.
+   # This is an implementation of MSC2000.
+   #
+   policy:
+      # Whether to enforce the password policy.
+      # Defaults to 'false'.
+      #
+      #enabled: true
+
+      # Minimum accepted length for a password.
+      # Defaults to 0.
+      #
+      #minimum_length: 15
+
+      # Whether a password must contain at least one digit.
+      # Defaults to 'false'.
+      #
+      #require_digit: true
+
+      # Whether a password must contain at least one symbol.
+      # A symbol is any character that's not a number or a letter.
+      # Defaults to 'false'.
+      #
+      #require_symbol: true
+
+      # Whether a password must contain at least one lowercase letter.
+      # Defaults to 'false'.
+      #
+      #require_lowercase: true
+
+      # Whether a password must contain at least one uppercase letter.
+      # Defaults to 'false'.
+      #
+      #require_uppercase: true
+
 
 # Configuration for sending emails from Synapse.
 #
@@ -1473,8 +1827,8 @@ email:
   # Username/password for authentication to the SMTP server. By default, no
   # authentication is attempted.
   #
-  # smtp_user: "exampleusername"
-  # smtp_pass: "examplepassword"
+  #smtp_user: "exampleusername"
+  #smtp_pass: "examplepassword"
 
   # Uncomment the following to require TLS transport security for SMTP.
   # By default, Synapse will connect over plain text, and will then switch to
@@ -1566,7 +1920,19 @@ email:
   #template_dir: "res/templates"
 
 
-#password_providers:
+# Password providers allow homeserver administrators to integrate
+# their Synapse installation with existing authentication methods
+# e.g. LDAP, external tokens, etc.
+#
+# For more information and known implementations, please see
+# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
+#
+# Note: instances wishing to use SAML or CAS authentication should
+# instead use the `saml2_config` or `cas_config` options,
+# respectively.
+#
+password_providers:
+#    # Example config for an LDAP auth provider
 #    - module: "ldap_auth_provider.LdapAuthProvider"
 #      config:
 #        enabled: true
@@ -1599,10 +1965,17 @@ email:
 #  include_content: true
 
 
-#spam_checker:
-#  module: "my_custom_project.SuperSpamChecker"
-#  config:
-#    example_option: 'things'
+# Spam checkers are third-party modules that can block specific actions
+# of local users, such as creating rooms and registering undesirable
+# usernames. They can also block actions of remote users by redacting
+# incoming events.
+#
+spam_checker:
+   #- module: "my_custom_project.SuperSpamChecker"
+   #  config:
+   #    example_option: 'things'
+   #- module: "some_other_project.BadEventStopper"
+   #  config:
+   #    example_stop_events_from: ['@bad:example.com']
 
 
 # Uncomment to allow non-server-admin users to create groups on this server
diff --git a/docs/spam_checker.md b/docs/spam_checker.md
index 5b5f5000b7..eb10e115f9 100644
--- a/docs/spam_checker.md
+++ b/docs/spam_checker.md
@@ -64,10 +64,12 @@ class ExampleSpamChecker:
 Modify the `spam_checker` section of your `homeserver.yaml` in the following
 manner:
 
-`module` should point to the fully qualified Python class that implements your
-custom logic, e.g. `my_module.ExampleSpamChecker`.
+Create a list entry with the keys `module` and `config`.
 
-`config` is a dictionary that gets passed to the spam checker class.
+* `module` should point to the fully qualified Python class that implements your
+  custom logic, e.g. `my_module.ExampleSpamChecker`.
+
+* `config` is a dictionary that gets passed to the spam checker class.
 
 ### Example
 
@@ -75,12 +77,15 @@ This section might look like:
 
 ```yaml
 spam_checker:
-  module: my_module.ExampleSpamChecker
-  config:
-    # Enable or disable a specific option in ExampleSpamChecker.
-    my_custom_option: true
+  - module: my_module.ExampleSpamChecker
+    config:
+      # Enable or disable a specific option in ExampleSpamChecker.
+      my_custom_option: true
 ```
 
+More spam checkers can be added in tandem by appending more items to the list. An
+action is blocked when at least one of the configured spam checkers flags it.
+
 ## Examples
 
 The [Mjolnir](https://github.com/matrix-org/mjolnir) project is a full fledged
diff --git a/docs/sso_mapping_providers.md b/docs/sso_mapping_providers.md
new file mode 100644
index 0000000000..abea432343
--- /dev/null
+++ b/docs/sso_mapping_providers.md
@@ -0,0 +1,148 @@
+# SSO Mapping Providers
+
+A mapping provider is a Python class (loaded via a Python module) that
+works out how to map attributes of an SSO response to Matrix-specific
+user attributes. Details such as user ID localpart, displayname, and even avatar
+URLs are all things that can be mapped from talking to a SSO service.
+
+As an example, an SSO service may return the email address
+"john.smith@example.com" for a user, whereas Synapse will need to figure out how
+to turn that into a displayname when creating a Matrix user for this individual.
+It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
+variations. As each Synapse configuration may want something different, this is
+where SSO mapping providers come into play.
+
+SSO mapping providers are currently supported for OpenID and SAML SSO
+configurations. Please see the details below for how to implement your own.
+
+External mapping providers are provided to Synapse in the form of an external
+Python module. You can retrieve this module from [PyPi](https://pypi.org) or elsewhere,
+but it must be importable via Synapse (e.g. it must be in the same virtualenv
+as Synapse). The Synapse config is then modified to point to the mapping provider
+(and optionally provide additional configuration for it).
+
+## OpenID Mapping Providers
+
+The OpenID mapping provider can be customized by editing the
+`oidc_config.user_mapping_provider.module` config option.
+
+`oidc_config.user_mapping_provider.config` allows you to provide custom
+configuration options to the module. Check with the module's documentation for
+what options it provides (if any). The options listed by default are for the
+user mapping provider built in to Synapse. If using a custom module, you should
+comment these options out and use those specified by the module instead.
+
+### Building a Custom OpenID Mapping Provider
+
+A custom mapping provider must specify the following methods (see the sketch after this list):
+
+* `__init__(self, parsed_config)`
+   - Arguments:
+     - `parsed_config` - A configuration object that is the return value of the
+       `parse_config` method. You should set any configuration options needed by
+       the module here.
+* `parse_config(config)`
+    - This method should have the `@staticmethod` decoration.
+    - Arguments:
+        - `config` - A `dict` representing the parsed content of the
+          `oidc_config.user_mapping_provider.config` homeserver config option.
+           Runs on homeserver startup. Providers should extract and validate
+           any option values they need here.
+    - Whatever is returned will be passed back to the user mapping provider module's
+      `__init__` method during construction.
+* `get_remote_user_id(self, userinfo)`
+    - Arguments:
+      - `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
+                     information from.
+    - This method must return a string, which is the unique identifier for the
+      user. Commonly the ``sub`` claim of the response.
+* `map_user_attributes(self, userinfo, token)`
+    - This method should be async.
+    - Arguments:
+      - `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
+                     information from.
+      - `token` - A dictionary which includes information necessary to make
+                  further requests to the OpenID provider.
+    - Returns a dictionary with two keys:
+      - localpart: A required string, used to generate the Matrix ID.
+      - displayname: An optional string, the display name for the user.
+
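+Below is a rough, hypothetical sketch of a provider implementing these methods.
+The class name, the `localpart_claim` option and the claim names are
+illustrative only, not part of Synapse:
+
+```python
+class ExampleOidcMappingProvider:
+    """Hypothetical OpenID mapping provider, for illustration only."""
+
+    def __init__(self, parsed_config):
+        # parsed_config is whatever parse_config() returned.
+        self._localpart_claim = parsed_config["localpart_claim"]
+
+    @staticmethod
+    def parse_config(config):
+        # `localpart_claim` is a made-up option for this sketch.
+        return {"localpart_claim": config.get("localpart_claim", "preferred_username")}
+
+    def get_remote_user_id(self, userinfo):
+        # Use the stable subject identifier as the unique remote user ID.
+        return userinfo["sub"]
+
+    async def map_user_attributes(self, userinfo, token):
+        # Derive the localpart from the configured claim; displayname is optional.
+        return {
+            "localpart": userinfo[self._localpart_claim],
+            "displayname": userinfo.get("name"),
+        }
+```
+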
+### Default OpenID Mapping Provider
+
+Synapse has a built-in OpenID mapping provider if a custom provider isn't
+specified in the config. It is located at
+[`synapse.handlers.oidc_handler.JinjaOidcMappingProvider`](../synapse/handlers/oidc_handler.py).
+
+## SAML Mapping Providers
+
+The SAML mapping provider can be customized by editing the
+`saml2_config.user_mapping_provider.module` config option.
+
+`saml2_config.user_mapping_provider.config` allows you to provide custom
+configuration options to the module. Check with the module's documentation for
+what options it provides (if any). The options listed by default are for the
+user mapping provider built in to Synapse. If using a custom module, you should
+comment these options out and use those specified by the module instead.
+
+### Building a Custom SAML Mapping Provider
+
+A custom mapping provider must specify the following methods (see the sketch after this list):
+
+* `__init__(self, parsed_config)`
+   - Arguments:
+     - `parsed_config` - A configuration object that is the return value of the
+       `parse_config` method. You should set any configuration options needed by
+       the module here.
+* `parse_config(config)`
+    - This method should have the `@staticmethod` decoration.
+    - Arguments:
+        - `config` - A `dict` representing the parsed content of the
+          `saml_config.user_mapping_provider.config` homeserver config option.
+           Runs on homeserver startup. Providers should extract and validate
+           any option values they need here.
+    - Whatever is returned will be passed back to the user mapping provider module's
+      `__init__` method during construction.
+* `get_saml_attributes(config)`
+    - This method should have the `@staticmethod` decoration.
+    - Arguments:
+        - `config` - A object resulting from a call to `parse_config`.
+    - Returns a tuple of two sets. The first set equates to the SAML auth
+      response attributes that are required for the module to function, whereas
+      the second set consists of those attributes which can be used if available,
+      but are not necessary.
+* `get_remote_user_id(self, saml_response, client_redirect_url)`
+    - Arguments:
+      - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
+                          information from.
+      - `client_redirect_url` - A string, the URL that the client will be
+                                redirected to.
+    - This method must return a string, which is the unique identifier for the
+      user. Commonly the ``uid`` claim of the response.
+* `saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
+    - Arguments:
+      - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
+                          information from.
+      - `failures` - An `int` that represents the amount of times the returned
+                     mxid localpart mapping has failed.  This should be used
+                     to create a deduplicated mxid localpart which should be
+                     returned instead. For example, if this method returns
+                     `john.doe` as the value of `mxid_localpart` in the returned
+                     dict, and that is already taken on the homeserver, this
+                     method will be called again with the same parameters but
+                     with failures=1. The method should then return a different
+                     `mxid_localpart` value, such as `john.doe1`.
+      - `client_redirect_url` - A string, the URL that the client will be
+                                redirected to.
+    - This method must return a dictionary, which will then be used by Synapse
+      to build a new user. The following keys are allowed:
+       * `mxid_localpart` - Required. The mxid localpart of the new user.
+       * `displayname` - The displayname of the new user. If not provided, will default to
+                         the value of `mxid_localpart`.
+       * `emails` - A list of emails for the new user. If not provided, will
+                    default to an empty list.
+
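+Below is a rough, hypothetical sketch of a provider implementing these methods.
+The class name and the SAML attribute names (`uid`, `displayName`, `email`) are
+illustrative only, not part of Synapse:
+
+```python
+class ExampleSamlMappingProvider:
+    """Hypothetical SAML mapping provider, for illustration only."""
+
+    def __init__(self, parsed_config):
+        # parsed_config is whatever parse_config() returned.
+        self._config = parsed_config
+
+    @staticmethod
+    def parse_config(config):
+        # This sketch takes no options; a real provider would validate and
+        # extract its options here.
+        return config
+
+    @staticmethod
+    def get_saml_attributes(config):
+        # `uid` is required; `displayName` and `email` are used if available.
+        return {"uid"}, {"displayName", "email"}
+
+    def get_remote_user_id(self, saml_response, client_redirect_url):
+        # saml_response.ava maps attribute names to lists of values.
+        return saml_response.ava["uid"][0]
+
+    def saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url):
+        # Append the failure count to deduplicate a localpart that is already taken.
+        localpart = saml_response.ava["uid"][0] + (str(failures) if failures else "")
+        return {
+            "mxid_localpart": localpart,
+            "displayname": saml_response.ava.get("displayName", [None])[0],
+            "emails": saml_response.ava.get("email", []),
+        }
+```
+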
+### Default SAML Mapping Provider
+
+Synapse has a built-in SAML mapping provider if a custom provider isn't
+specified in the config. It is located at
+[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).
diff --git a/docs/systemd-with-workers/README.md b/docs/systemd-with-workers/README.md
new file mode 100644
index 0000000000..257c09446f
--- /dev/null
+++ b/docs/systemd-with-workers/README.md
@@ -0,0 +1,67 @@
+# Setting up Synapse with Workers and Systemd
+
+This is a setup for managing synapse with systemd, including support for
+managing workers. It provides a `matrix-synapse` service for the master, as
+well as a `matrix-synapse-worker@` service template for any workers you
+require. Additionally, to group the required services, it sets up a
+`matrix-synapse.target`.
+
+See the folder [system](system) for the systemd unit files.
+
+The folder [workers](workers) contains an example configuration for the
+`federation_reader` worker.
+
+## Synapse configuration files
+
+See [workers.md](../workers.md) for information on how to set up the
+configuration files and reverse-proxy correctly. You can find an example worker
+config in the [workers](workers) folder.
+
+Systemd manages daemonization itself, so ensure that none of the configuration
+files set either `daemonize` or `worker_daemonize`.
+
+The config files of all workers are expected to be located in
+`/etc/matrix-synapse/workers`. If you want to use a different location, edit
+the provided `*.service` files accordingly.
+
+There is no need for a separate configuration file for the master process.
+
+## Set up
+
+1. Adjust synapse configuration files as above.
+1. Copy the `*.service` and `*.target` files in [system](system) to
+`/etc/systemd/system`.
+1. Run `systemctl daemon-reload` to tell systemd to load the new unit files.
+1. Run `systemctl enable matrix-synapse.service`. This will configure the
+synapse master process to be started as part of the `matrix-synapse.target`
+target.
+1. For each worker process to be enabled, run `systemctl enable
+matrix-synapse-worker@<worker_name>.service`. For each `<worker_name>`, there
+should be a corresponding configuration file
+`/etc/matrix-synapse/workers/<worker_name>.yaml`.
+1. Start all the synapse processes with `systemctl start matrix-synapse.target`.
+1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`.
+
+## Usage
+
+Once the services are correctly set up, you can use the following commands
+to manage your synapse installation:
+
+```sh
+# Restart Synapse master and all workers
+systemctl restart matrix-synapse.target
+
+# Stop Synapse and all workers
+systemctl stop matrix-synapse.target
+
+# Start the master alone
+systemctl start matrix-synapse.service
+
+# Restart a specific worker (eg. federation_reader); the master is
+# unaffected by this.
+systemctl restart matrix-synapse-worker@federation_reader.service
+
+# Add a new worker (assuming all configs are set up already)
+systemctl enable matrix-synapse-worker@federation_writer.service
+systemctl restart matrix-synapse.target
+```
diff --git a/docs/systemd-with-workers/system/matrix-synapse-worker@.service b/docs/systemd-with-workers/system/matrix-synapse-worker@.service
new file mode 100644
index 0000000000..39bc5e88e8
--- /dev/null
+++ b/docs/systemd-with-workers/system/matrix-synapse-worker@.service
@@ -0,0 +1,20 @@
+[Unit]
+Description=Synapse %i
+AssertPathExists=/etc/matrix-synapse/workers/%i.yaml
+# This service should be restarted when the synapse target is restarted.
+PartOf=matrix-synapse.target
+
+[Service]
+Type=notify
+NotifyAccess=main
+User=matrix-synapse
+WorkingDirectory=/var/lib/matrix-synapse
+EnvironmentFile=/etc/default/matrix-synapse
+ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.generic_worker --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
+ExecReload=/bin/kill -HUP $MAINPID
+Restart=always
+RestartSec=3
+SyslogIdentifier=matrix-synapse-%i
+
+[Install]
+WantedBy=matrix-synapse.target
diff --git a/docs/systemd-with-workers/system/matrix-synapse.service b/docs/systemd-with-workers/system/matrix-synapse.service
new file mode 100644
index 0000000000..c7b5ddfa49
--- /dev/null
+++ b/docs/systemd-with-workers/system/matrix-synapse.service
@@ -0,0 +1,21 @@
+[Unit]
+Description=Synapse master
+
+# This service should be restarted when the synapse target is restarted.
+PartOf=matrix-synapse.target
+
+[Service]
+Type=notify
+NotifyAccess=main
+User=matrix-synapse
+WorkingDirectory=/var/lib/matrix-synapse
+EnvironmentFile=/etc/default/matrix-synapse
+ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --generate-keys
+ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/
+ExecReload=/bin/kill -HUP $MAINPID
+Restart=always
+RestartSec=3
+SyslogIdentifier=matrix-synapse
+
+[Install]
+WantedBy=matrix-synapse.target
diff --git a/docs/systemd-with-workers/system/matrix-synapse.target b/docs/systemd-with-workers/system/matrix-synapse.target
new file mode 100644
index 0000000000..e0eba1b342
--- /dev/null
+++ b/docs/systemd-with-workers/system/matrix-synapse.target
@@ -0,0 +1,6 @@
+[Unit]
+Description=Synapse parent target
+After=network.target
+
+[Install]
+WantedBy=multi-user.target
diff --git a/docs/systemd-with-workers/workers/federation_reader.yaml b/docs/systemd-with-workers/workers/federation_reader.yaml
new file mode 100644
index 0000000000..5b65c7040d
--- /dev/null
+++ b/docs/systemd-with-workers/workers/federation_reader.yaml
@@ -0,0 +1,13 @@
+worker_app: synapse.app.federation_reader
+
+worker_replication_host: 127.0.0.1
+worker_replication_port: 9092
+worker_replication_http_port: 9093
+
+worker_listeners:
+    - type: http
+      port: 8011
+      resources:
+          - names: [federation]
+
+worker_log_config: /etc/matrix-synapse/federation-reader-log.yaml
diff --git a/docs/tcp_replication.md b/docs/tcp_replication.md
index e3a4634b14..db318baa9d 100644
--- a/docs/tcp_replication.md
+++ b/docs/tcp_replication.md
@@ -14,16 +14,18 @@ example flow would be (where '>' indicates master to worker and
 '<' worker to master flows):
 
     > SERVER example.com
-    < REPLICATE events 53
-    > RDATA events 54 ["$foo1:bar.com", ...]
-    > RDATA events 55 ["$foo4:bar.com", ...]
-
-The example shows the server accepting a new connection and sending its
-identity with the `SERVER` command, followed by the client asking to
-subscribe to the `events` stream from the token `53`. The server then
-periodically sends `RDATA` commands which have the format
-`RDATA <stream_name> <token> <row>`, where the format of `<row>` is
-defined by the individual streams.
+    < REPLICATE
+    > POSITION events master 53
+    > RDATA events master 54 ["$foo1:bar.com", ...]
+    > RDATA events master 55 ["$foo4:bar.com", ...]
+
+The example shows the server accepting a new connection and sending its identity
+with the `SERVER` command, followed by the client sending a `REPLICATE` command,
+to which the server responds with the position of all streams. The server then
+periodically sends `RDATA` commands
+which have the format `RDATA <stream_name> <instance_name> <token> <row>`, where
+the format of `<row>` is defined by the individual streams. The
+`<instance_name>` is the name of the Synapse process that generated the data
+(usually "master").
 
 Error reporting happens by either the client or server sending an ERROR
 command, and usually the connection will be closed.
@@ -32,9 +34,6 @@ Since the protocol is a simple line based, its possible to manually
 connect to the server using a tool like netcat. A few things should be
 noted when manually using the protocol:
 
--   When subscribing to a stream using `REPLICATE`, the special token
-    `NOW` can be used to get all future updates. The special stream name
-    `ALL` can be used with `NOW` to subscribe to all available streams.
 -   The federation stream is only available if federation sending has
     been disabled on the main process.
 -   The server will only time connections out that have sent a `PING`
@@ -55,7 +54,7 @@ The basic structure of the protocol is line based, where the initial
 word of each line specifies the command. The rest of the line is parsed
 based on the command. For example, the RDATA command is defined as:
 
-    RDATA <stream_name> <token> <row_json>
+    RDATA <stream_name> <instance_name> <token> <row_json>
 
 (Note that <row_json> may contain spaces, but cannot contain
 newlines.)
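+
+As a rough illustration (this is not Synapse's actual parser), a line in that
+format could be split into its parts like so:
+
+```python
+import json
+
+def parse_rdata(line):
+    """Toy parser for: RDATA <stream_name> <instance_name> <token> <row_json>."""
+    command, stream_name, instance_name, token, row_json = line.split(" ", 4)
+    assert command == "RDATA"
+    # A token of "batch" (see the batching example below) means that further
+    # rows for the same token will follow.
+    parsed_token = None if token == "batch" else int(token)
+    return stream_name, instance_name, parsed_token, json.loads(row_json)
+
+# parse_rdata('RDATA events master 54 ["$foo1:bar.com", null]')
+# -> ('events', 'master', 54, ['$foo1:bar.com', None])
+```
+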
@@ -91,9 +90,7 @@ The client:
 -   Sends a `NAME` command, allowing the server to associate a human
     friendly name with the connection. This is optional.
 -   Sends a `PING` as above
--   For each stream the client wishes to subscribe to it sends a
-    `REPLICATE` with the `stream_name` and token it wants to subscribe
-    from.
+-   Sends a `REPLICATE` to get the current position of all streams.
 -   On receipt of a `SERVER` command, checks that the server name
     matches the expected server name.
 
@@ -140,14 +137,12 @@ the wire:
     > PING 1490197665618
     < NAME synapse.app.appservice
     < PING 1490197665618
-    < REPLICATE events 1
-    < REPLICATE backfill 1
-    < REPLICATE caches 1
-    > POSITION events 1
-    > POSITION backfill 1
-    > POSITION caches 1
-    > RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
-    > RDATA events 14 ["$149019767112vOHxz:localhost:8823",
+    < REPLICATE
+    > POSITION events master 1
+    > POSITION backfill master 1
+    > POSITION caches master 1
+    > RDATA caches master 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
+    > RDATA events master 14 ["$149019767112vOHxz:localhost:8823",
         "!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
     < PING 1490197675618
     > ERROR server stopping
@@ -158,10 +153,10 @@ position without needing to send data with the `RDATA` command.
 
 An example of a batched set of `RDATA` is:
 
-    > RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
-    > RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
-    > RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
-    > RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
+    > RDATA caches master batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
+    > RDATA caches master batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
+    > RDATA caches master batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
+    > RDATA caches master 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
 
 In this case the client shouldn't advance its caches token until it
 sees the last `RDATA`.
@@ -181,9 +176,14 @@ client (C):
 
 #### POSITION (S)
 
-   The position of the stream has been updated. Sent to the client
-    after all missing updates for a stream have been sent to the client
-    and they're now up to date.
+   On receipt of a POSITION command clients should check if they have missed any
+   updates, and if so then fetch them out of band. Sent in response to a
+   REPLICATE command (but can happen at any time).
+
+   The POSITION command includes the source of the stream. Currently all streams
+   are written by a single process (usually "master"). If fetching missing
+   updates via HTTP API, rather than via the DB, then processes should make the
+   request to the appropriate process.
 
 #### ERROR (S, C)
 
@@ -199,24 +199,17 @@ client (C):
 
 #### REPLICATE (C)
 
-Asks the server to replicate a given stream. The syntax is:
+Asks the server for the current position of all streams.
 
-```
-    REPLICATE <stream_name> <token>
-```
+#### USER_SYNC (C)
 
-Where `<token>` may be either:
- * a numeric stream_id to stream updates since (exclusive)
- * `NOW` to stream all subsequent updates.
+   A user has started or stopped syncing on this process.
 
-The `<stream_name>` is the name of a replication stream to subscribe
-to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
-of streams). It can also be `ALL` to subscribe to all known streams,
-in which case the `<token>` must be set to `NOW`.
+#### CLEAR_USER_SYNC (C)
 
-#### USER_SYNC (C)
+   The server should clear all associated user sync data from the worker.
 
-   A user has started or stopped syncing
+   This is used when a worker is shutting down.
 
 #### FEDERATION_ACK (C)
 
@@ -226,14 +219,6 @@ in which case the `<token>` must be set to `NOW`.
 
    Inform the server a pusher should be removed
 
-#### INVALIDATE_CACHE (C)
-
-   Inform the server a cache should be invalidated
-
-#### SYNC (S, C)
-
-   Used exclusively in tests
-
 ### REMOTE_SERVER_UP (S, C)
 
    Inform other processes that a remote server may have come back online.
@@ -252,12 +237,12 @@ Each individual cache invalidation results in a row being sent down
 replication, which includes the cache name (the name of the function)
 and the key to invalidate. For example:
 
-    > RDATA caches 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
+    > RDATA caches master 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
 
 Alternatively, an entire cache can be invalidated by sending down a `null`
 instead of the key. For example:
 
-    > RDATA caches 550953772 ["get_user_by_id", null, 1550574873252]
+    > RDATA caches master 550953772 ["get_user_by_id", null, 1550574873252]
 
 However, there are times when a number of caches need to be invalidated
 at the same time with the same key. To reduce traffic we batch those
diff --git a/docs/turn-howto.md b/docs/turn-howto.md
index 1bd3943f54..d4a726be66 100644
--- a/docs/turn-howto.md
+++ b/docs/turn-howto.md
@@ -11,7 +11,14 @@ TURN server.
 
 The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
 
-## `coturn` Setup
+## Requirements
+
+For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint with a public IP.
+
+Hosting TURN behind a NAT (even with appropriate port forwarding) is known to
+cause issues and often does not work.
+
+## `coturn` setup
 
 ### Initial installation
 
@@ -19,7 +26,13 @@ The TURN daemon `coturn` is available from a variety of sources such as native p
 
 #### Debian installation
 
-    # apt install coturn
+Just install the debian package:
+
+```sh
+apt install coturn
+```
+
+This will install and start a systemd service called `coturn`.
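+
+You can check that the service is running with, for example:
+
+```sh
+systemctl status coturn
+```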
 
 #### Source installation
 
@@ -56,38 +69,52 @@ The TURN daemon `coturn` is available from a variety of sources such as native p
 1.  Consider your security settings. TURN lets users request a relay which will
     connect to arbitrary IP addresses and ports. The following configuration is
     suggested as a minimum starting point:
-    
+
         # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
         no-tcp-relay
-        
+
         # don't let the relay ever try to connect to private IP address ranges within your network (if any)
         # given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
         denied-peer-ip=10.0.0.0-10.255.255.255
         denied-peer-ip=192.168.0.0-192.168.255.255
         denied-peer-ip=172.16.0.0-172.31.255.255
-        
+
         # special case the turn server itself so that client->TURN->TURN->client flows work
         allowed-peer-ip=10.0.0.1
-        
+
         # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
         user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
         total-quota=1200
 
-    Ideally coturn should refuse to relay traffic which isn't SRTP; see
-    <https://github.com/matrix-org/synapse/issues/2009>
+1.  Also consider supporting TLS/DTLS. To do this, add the following settings
+    to `turnserver.conf`:
+
+        # TLS certificates, including intermediate certs.
+        # For Let's Encrypt certificates, use `fullchain.pem` here.
+        cert=/path/to/fullchain.pem
+
+        # TLS private key file
+        pkey=/path/to/privkey.pem
 
 1.  Ensure your firewall allows traffic into the TURN server on the ports
-    you've configured it to listen on (remember to allow both TCP and UDP TURN
-    traffic)
+    you've configured it to listen on: by default, 3478 and 5349 for TURN(s)
+    traffic (remember to allow both TCP and UDP), and ports 49152-65535 for
+    the UDP relay. An example firewall configuration is sketched after this
+    list.
+
+1.  (Re)start the turn server:
 
-1.  If you've configured coturn to support TLS/DTLS, generate or import your
-    private key and certificate.
+    * If you used the Debian package (or have set up a systemd unit yourself):
+      ```sh
+      systemctl restart coturn
+      ```
 
-1.  Start the turn server:
+    * If you installed from source:
 
-         bin/turnserver -o
+      ```sh
+      bin/turnserver -o
+      ```
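+
+As an illustration of the firewall step above (assuming `ufw`; translate to
+your firewall of choice and to any non-default ports you have configured), the
+rules might look like:
+
+```sh
+ufw allow 3478/tcp
+ufw allow 3478/udp
+ufw allow 5349/tcp
+ufw allow 5349/udp
+ufw allow 49152:65535/udp
+```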
 
-## synapse Setup
+## Synapse setup
 
 Your home server configuration file needs the following extra keys:
 
@@ -113,13 +140,20 @@ Your home server configuration file needs the following extra keys:
 As an example, here is the relevant section of the config file for matrix.org:
 
     turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
-    turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
+    turn_shared_secret: "n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons"
     turn_user_lifetime: 86400000
     turn_allow_guests: True
 
 After updating the homeserver configuration, you must restart synapse:
 
+  * If you use synctl:
+    ```sh
     cd /where/you/run/synapse
     ./synctl restart
+    ```
+  * If you use systemd:
+    ```sh
+    systemctl restart synapse.service
+    ```
 
 ...and your homeserver now supports VoIP relaying!
diff --git a/docs/workers.md b/docs/workers.md
index cb3b9f8e68..7512eff43a 100644
--- a/docs/workers.md
+++ b/docs/workers.md
@@ -1,23 +1,31 @@
 # Scaling synapse via workers
 
-Synapse has experimental support for splitting out functionality into
-multiple separate python processes, helping greatly with scalability.  These
+For small instances it is recommended to run Synapse in monolith mode (the
+default). For larger instances where performance is a concern it can be helpful
+to split out functionality into multiple separate Python processes. These
 processes are called 'workers', and are (eventually) intended to scale
 horizontally independently.
 
-All of the below is highly experimental and subject to change as Synapse evolves,
-but documenting it here to help folks needing highly scalable Synapses similar
-to the one running matrix.org!
+Synapse's worker support is under active development and subject to change as
+we attempt to rapidly scale ever larger Synapse instances. However we are
+documenting it here to help admins needing a highly scalable Synapse instance
+similar to the one running `matrix.org`.
 
-All processes continue to share the same database instance, and as such, workers
-only work with postgres based synapse deployments (sharing a single sqlite
-across multiple processes is a recipe for disaster, plus you should be using
-postgres anyway if you care about scalability).
+All processes continue to share the same database instance, and as such,
+workers only work with PostgreSQL-based Synapse deployments. SQLite should only
+be used for demo purposes and any admin considering workers should already be
+running PostgreSQL.
 
-The workers communicate with the master synapse process via a synapse-specific
-TCP protocol called 'replication' - analogous to MySQL or Postgres style
-database replication; feeding a stream of relevant data to the workers so they
-can be kept in sync with the main synapse process and database state.
+## Master/worker communication
+
+The workers communicate with the master process via a Synapse-specific protocol
+called 'replication' (analogous to MySQL- or Postgres-style database
+replication) which feeds a stream of relevant data from the master to the
+workers so they can be kept in sync with the master process and database state.
+
+Additionally, workers may make HTTP requests to the master, to send information
+in the other direction. Typically this is used for operations which need to
+wait for a reply - such as sending an event.
 
 ## Configuration
 
@@ -27,72 +35,61 @@ the correct worker, or to the main synapse instance. Note that this includes
 requests made to the federation port. See [reverse_proxy.md](reverse_proxy.md)
 for information on setting up a reverse proxy.
 
-To enable workers, you need to add two replication listeners to the master
-synapse, e.g.:
-
-    listeners:
-      # The TCP replication port
-      - port: 9092
-        bind_address: '127.0.0.1'
-        type: replication
-      # The HTTP replication port
-      - port: 9093
-        bind_address: '127.0.0.1'
-        type: http
-        resources:
-         - names: [replication]
-
-Under **no circumstances** should these replication API listeners be exposed to
-the public internet; it currently implements no authentication whatsoever and is
-unencrypted.
+To enable workers, you need to add *two* replication listeners to the
+main Synapse configuration file (`homeserver.yaml`). For example:
 
-(Roughly, the TCP port is used for streaming data from the master to the
-workers, and the HTTP port for the workers to send data to the main
-synapse process.)
-
-You then create a set of configs for the various worker processes.  These
-should be worker configuration files, and should be stored in a dedicated
-subdirectory, to allow synctl to manipulate them. An additional configuration
-for the master synapse process will need to be created because the process will
-not be started automatically. That configuration should look like this:
-
-    worker_app: synapse.app.homeserver
-    daemonize: true
+```yaml
+listeners:
+  # The TCP replication port
+  - port: 9092
+    bind_address: '127.0.0.1'
+    type: replication
+
+  # The HTTP replication port
+  - port: 9093
+    bind_address: '127.0.0.1'
+    type: http
+    resources:
+     - names: [replication]
+```
 
-Each worker configuration file inherits the configuration of the main homeserver
-configuration file.  You can then override configuration specific to that worker,
-e.g. the HTTP listener that it provides (if any); logging configuration; etc.
-You should minimise the number of overrides though to maintain a usable config.
+Under **no circumstances** should these replication API listeners be exposed to
+the public internet; they have no authentication and are unencrypted.
 
-You must specify the type of worker application (`worker_app`). The currently
-available worker applications are listed below. You must also specify the
-replication endpoints that it's talking to on the main synapse process.
-`worker_replication_host` should specify the host of the main synapse,
-`worker_replication_port` should point to the TCP replication listener port and
-`worker_replication_http_port` should point to the HTTP replication port.
+You should then create a set of configs for the various worker processes.  Each
+worker configuration file inherits the configuration of the main homeserver
+configuration file.  You can then override configuration specific to that
+worker, e.g. the HTTP listener that it provides (if any); logging
+configuration; etc.  You should minimise the number of overrides though to
+maintain a usable config.
 
-Currently, the `event_creator` and `federation_reader` workers require specifying
-`worker_replication_http_port`.
+In the config file for each worker, you must specify the type of worker
+application (`worker_app`). The currently available worker applications are
+listed below. You must also specify the replication endpoints that it should
+talk to on the main synapse process.  `worker_replication_host` should specify
+the host of the main synapse, `worker_replication_port` should point to the TCP
+replication listener port and `worker_replication_http_port` should point to
+the HTTP replication port.
 
-For instance:
+For example:
 
-    worker_app: synapse.app.synchrotron
+```yaml
+worker_app: synapse.app.synchrotron
 
-    # The replication listener on the synapse to talk to.
-    worker_replication_host: 127.0.0.1
-    worker_replication_port: 9092
-    worker_replication_http_port: 9093
+# The replication listener on the synapse to talk to.
+worker_replication_host: 127.0.0.1
+worker_replication_port: 9092
+worker_replication_http_port: 9093
 
-    worker_listeners:
-     - type: http
-       port: 8083
-       resources:
-         - names:
-           - client
+worker_listeners:
+ - type: http
+   port: 8083
+   resources:
+     - names:
+       - client
 
-    worker_daemonize: True
-    worker_pid_file: /home/matrix/synapse/synchrotron.pid
-    worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
+worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
+```
 
 ...is a full configuration for a synchrotron worker instance, which will expose a
 plain HTTP `/sync` endpoint on port 8083 separately from the `/sync` endpoint provided
@@ -101,7 +98,75 @@ by the main synapse.
 Obviously you should configure your reverse-proxy to route the relevant
 endpoints to the worker (`localhost:8083` in the above example).
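+
+As an illustration only (assuming nginx; adapt to your reverse proxy, and check
+the endpoint listing for the worker type you are running), routing `/sync`
+requests to the synchrotron above might look like:
+
+```nginx
+location ~ ^/_matrix/client/(v2_alpha|r0)/sync$ {
+    proxy_pass http://localhost:8083;
+}
+```
+
+Any other endpoints handled by the worker should be routed in the same way.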
 
-Finally, to actually run your worker-based synapse, you must pass synctl the -a
+Finally, you need to start your worker processes. This can be done with either
+`synctl` or your distribution's preferred service manager such as `systemd`. We
+recommend the use of `systemd` where available: for information on setting up
+`systemd` to start synapse workers, see
+[systemd-with-workers](systemd-with-workers). To use `synctl`, see below.
+
+### **Experimental** support for replication over redis
+
+As of Synapse v1.13.0, it is possible to configure Synapse to send replication
+via a [Redis pub/sub channel](https://redis.io/topics/pubsub). This is an
+alternative to direct TCP connections to the master: rather than all the
+workers connecting to the master, all the workers and the master connect to
+Redis, which relays replication commands between processes. This can give a
+significant CPU saving on the master and will be a prerequisite for upcoming
+performance improvements.
+
+Note that this support is currently experimental; you may experience lost
+messages and similar problems! It is strongly recommended that admins setting
+up workers for the first time use direct TCP replication as above.
+
+To configure Synapse to use Redis:
+
+1. Install Redis following the normal procedure for your distribution - for
+   example, on Debian, `apt install redis-server`. (It is safe to use an
+   existing Redis deployment if you have one: we use a pub/sub stream named
+   according to the `server_name` of your synapse server.)
+2. Check Redis is running and accessible: you should be able to `echo PING | nc -q1
+   localhost 6379` and get a response of `+PONG`.
+3. Install the python prerequisites. If you installed synapse into a
+   virtualenv, this can be done with:
+   ```sh
+   pip install matrix-synapse[redis]
+   ```
+   The debian packages from matrix.org already include the required
+   dependencies.
+4. Add config to the shared configuration (`homeserver.yaml`):
+    ```yaml
+    redis:
+      enabled: true
+    ```
+    Optional parameters which can go alongside `enabled` are `host`, `port`
+    and `password`. Normally none of these are required (an example is shown
+    after this list).
+5. Restart master and all workers.
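+
+For example, a `redis` section using the optional parameters from step 4 might
+look like this (host, port and password values are illustrative only):
+
+```yaml
+redis:
+  enabled: true
+  host: localhost
+  port: 6379
+  password: your-redis-password
+```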
+
+Once redis replication is in use, `worker_replication_port` is redundant and
+can be removed from the worker configuration files. Similarly, the `listener`
+entry for the TCP replication port can be removed from the main configuration
+file. Note that the HTTP replication port is
+still required.
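+
+As an illustrative sketch only, the synchrotron example above might then be
+reduced to:
+
+```yaml
+worker_app: synapse.app.synchrotron
+
+# The HTTP replication endpoint on the main process is still required.
+worker_replication_host: 127.0.0.1
+worker_replication_http_port: 9093
+
+worker_listeners:
+ - type: http
+   port: 8083
+   resources:
+     - names:
+       - client
+
+worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
+```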
+
+### Using synctl
+
+If you want to use `synctl` to manage your synapse processes, you will need to
+create an additional configuration file for the master synapse process. That
+configuration should look like this:
+
+```yaml
+worker_app: synapse.app.homeserver
+```
+
+Additionally, each worker app must be configured with the name of a "pid file",
+to which it will write its process ID when it starts. For example, for a
+synchrotron, you might write:
+
+```yaml
+worker_pid_file: /home/matrix/synapse/synchrotron.pid
+```
+
+Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
 commandline option to tell it to operate on all the worker configurations found
 in the given directory, e.g.: