diff --git a/docs/admin_api/rooms.md b/docs/admin_api/rooms.md
index 26fe8b8679..624e7745ba 100644
--- a/docs/admin_api/rooms.md
+++ b/docs/admin_api/rooms.md
@@ -264,3 +264,57 @@ Response:
Once the `next_token` parameter is no longer present, we know we've reached the
end of the list.
+
+# DRAFT: Room Details API
+
+The Room Details admin API allows server admins to get all details of a room.
+
+This API is still a draft and details might change!
+
+The following fields are possible in the JSON response body:
+
+* `room_id` - The ID of the room.
+* `name` - The name of the room.
+* `canonical_alias` - The canonical (main) alias address of the room.
+* `joined_members` - How many users are currently in the room.
+* `joined_local_members` - How many local users are currently in the room.
+* `version` - The version of the room as a string.
+* `creator` - The `user_id` of the room creator.
+* `encryption` - The end-to-end encryption algorithm used for messages in the room, or `null` if encryption is not enabled.
+* `federatable` - Whether users on other servers can join this room.
+* `public` - Whether the room is visible in the room directory.
+* `join_rules` - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
+* `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
+* `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
+* `state_events` - Total number of state events in the room, an indicator of the room's complexity.
+
+## Usage
+
+A standard request:
+
+```
+GET /_synapse/admin/v1/rooms/<room_id>
+
+{}
+```
+
+Response:
+
+```
+{
+ "room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
+ "name": "Music Theory",
+ "canonical_alias": "#musictheory:matrix.org",
+ "joined_members": 127,
+ "joined_local_members": 2,
+ "version": "1",
+ "creator": "@foo:matrix.org",
+ "encryption": null,
+ "federatable": true,
+ "public": true,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 93534
+}
+```
diff --git a/docs/dev/git.md b/docs/dev/git.md
new file mode 100644
index 0000000000..b747ff20c9
--- /dev/null
+++ b/docs/dev/git.md
@@ -0,0 +1,148 @@
+Some notes on how we use git
+============================
+
+On keeping the commit history clean
+-----------------------------------
+
+In an ideal world, our git commit history would be a linear progression of
+commits each of which contains a single change building on what came
+before. Here, by way of an arbitrary example, is the top of `git log --graph
+b2dba0607`:
+
+<img src="git/clean.png" alt="clean git graph" width="500px">
+
+Note how the commit comment explains clearly what is changing and why. Also
+note the *absence* of merge commits, as well as the absence of commits called
+things like (to pick a few culprits):
+[“pep8”](https://github.com/matrix-org/synapse/commit/84691da6c), [“fix broken
+test”](https://github.com/matrix-org/synapse/commit/474810d9d),
+[“oops”](https://github.com/matrix-org/synapse/commit/c9d72e457),
+[“typo”](https://github.com/matrix-org/synapse/commit/836358823), or [“Who's
+the president?”](https://github.com/matrix-org/synapse/commit/707374d5d).
+
+There are a number of reasons why keeping a clean commit history is a good
+thing:
+
+ * From time to time, after a change lands, it turns out to be necessary to
+ revert it, or to backport it to a release branch. Those operations are
+ *much* easier when the change is contained in a single commit.
+
+ * Similarly, it's much easier to answer questions like “is the fix for
+ `/publicRooms` on the release branch?” if that change consists of a single
+ commit.
+
+ * Likewise: “what has changed on this branch in the last week?” is much
+ clearer without merges and “pep8” commits everywhere.
+
+ * Sometimes we need to figure out where a bug got introduced, or some
+ behaviour changed. One way of doing that is with `git bisect`: pick an
+ arbitrary commit between the known good point and the known bad point, and
+ see how the code behaves. However, that strategy fails if the commit you
+ chose is the middle of someone's epic branch in which they broke the world
+ before putting it back together again.
+
+One counterargument is that it is sometimes useful to see how a PR evolved as
+it went through review cycles. This is true, but that information is always
+available via the GitHub UI (or via the little-known [refs/pull
+namespace](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally)).
+
+
+Of course, in reality, things are more complicated than that. We have release
+branches as well as `develop` and `master`, and we deliberately merge changes
+between them. Bugs often slip through and have to be fixed later. That's all
+fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim
+towards.
+
+Merges, squashes, rebases: wtf?
+-------------------------------
+
+Ok, so that's what we'd like to achieve. How do we achieve it?
+
+The TL;DR is: when you come to merge a pull request, you *probably* want to
+“squash and merge”:
+
+<img src="git/squash.png" alt="squash and merge" width="500px">
+
+(This applies whether you are merging your own PR, or that of another
+contributor.)
+
+“Squash and merge”<sup id="a1">[1](#f1)</sup> takes all of the changes in the
+PR, and bundles them into a single commit. GitHub gives you the opportunity to
+edit the commit message before you confirm, and normally you should do so,
+because the default will be useless (again: `* woops typo` is not a useful
+thing to keep in the historical record).
+
+The main problem with this approach comes when you have a series of pull
+requests which build on top of one another: as soon as you squash-merge the
+first PR, you'll end up with a stack of conflicts to resolve in all of the
+others. In general, it's best to avoid this situation in the first place by
+trying not to have multiple related PRs in flight at the same time. Still,
+sometimes that's not possible and doing a regular merge is the lesser evil.
+
+Another occasion in which a regular merge makes more sense is a PR where you've
+deliberately created a series of commits each of which makes sense in its own
+right. For example: [a PR which gradually propagates a refactoring operation
+through the codebase](https://github.com/matrix-org/synapse/pull/6837), or [a
+PR which is the culmination of several other
+PRs](https://github.com/matrix-org/synapse/pull/5987). In this case the ability
+to figure out when a particular change/bug was introduced could be very useful.
+
+Ultimately: **this is not a hard-and-fast rule**. If in doubt, ask yourself “does
+each of the commits I am about to merge make sense in its own right?”, but
+remember that we're just doing our best to balance “keeping the commit history
+clean” with other factors.
+
+Git branching model
+-------------------
+
+A [lot](https://nvie.com/posts/a-successful-git-branching-model/)
+[of](http://scottchacon.com/2011/08/31/github-flow.html)
+[words](https://www.endoflineblog.com/gitflow-considered-harmful) have been
+written in the past about git branching models (no really, [a
+lot](https://martinfowler.com/articles/branching-patterns.html)). I tend to
+think the whole thing is overblown. Fundamentally, it's not that
+complicated. Here's how we do it.
+
+Let's start with a picture:
+
+<img src="git/branches.jpg" alt="git branching model">
+
+It looks complicated, but it's really not. There's one basic rule: *anyone* is
+free to merge from *any* more-stable branch to *any* less-stable branch at
+*any* time<sup id="a2">[2](#f2)</sup>. (The principle behind this is that if a
+change is good enough for the more-stable branch, then it's also good enough to
+put in a less-stable branch.)
+
+Meanwhile, merging (or squashing, as per the above) from a less-stable to a
+more-stable branch is a deliberate action in which you want to publish a change
+or a set of changes to (some subset of) the world: for example, this happens
+when a PR is landed, or as part of our release process.
+
+So, what counts as a more- or less-stable branch? A little reflection will show
+that our active branches are ordered thus, from more-stable to less-stable:
+
+ * `master` (tracks our last release).
+ * `release-vX.Y.Z` (the branch where we prepare the next release)<sup
+ id="a3">[3](#f3)</sup>.
+ * PR branches which are targeting the release.
+ * `develop` (our "mainline" branch containing our bleeding-edge code).
+ * regular PR branches.
+
+The corollary is: if you have a bugfix that needs to land in both
+`release-vX.Y.Z` *and* `develop`, then you should base your PR on
+`release-vX.Y.Z`, get it merged there, and then merge from `release-vX.Y.Z` to
+`develop`. (If a fix lands in `develop` and we later need it in a
+release-branch, we can of course cherry-pick it, but landing it in the release
+branch first helps reduce the chance of annoying conflicts.)
+
+---
+
+<b id="f1">[1]</b>: “Squash and merge” is GitHub's term for this
+operation. Given that there is no merge involved, I'm not convinced it's the
+most intuitive name. [^](#a1)
+
+<b id="f2">[2]</b>: Well, anyone with commit access.[^](#a2)
+
+<b id="f3">[3]</b>: Very, very occasionally (I think this has happened once in
+the history of Synapse), we've had two releases in flight at once. Obviously,
+`release-v1.2.3` is more-stable than `release-v1.3.0`. [^](#a3)
diff --git a/docs/dev/git/branches.jpg b/docs/dev/git/branches.jpg
new file mode 100644
index 0000000000..715ecc8cd0
--- /dev/null
+++ b/docs/dev/git/branches.jpg
Binary files differdiff --git a/docs/dev/git/clean.png b/docs/dev/git/clean.png
new file mode 100644
index 0000000000..3accd7ccef
--- /dev/null
+++ b/docs/dev/git/clean.png
Binary files differdiff --git a/docs/dev/git/squash.png b/docs/dev/git/squash.png
new file mode 100644
index 0000000000..234caca3e4
--- /dev/null
+++ b/docs/dev/git/squash.png
Binary files differdiff --git a/docs/dev/oidc.md b/docs/dev/oidc.md
new file mode 100644
index 0000000000..a90c5d2441
--- /dev/null
+++ b/docs/dev/oidc.md
@@ -0,0 +1,175 @@
+# How to test OpenID Connect
+
+Any OpenID Connect Provider (OP) should work with Synapse, as long as it supports the authorization code flow.
+There are a few options for that:
+
+ - start a local OP. Synapse has been tested with [Hydra][hydra] and [Dex][dex-idp].
+ Note that for an OP to work, it should be served under a secure (HTTPS) origin.
+   A certificate signed with a self-signed, locally trusted CA should work. In that case, start Synapse with an `SSL_CERT_FILE` environment variable set to the path of the CA.
+ - use a publicly available OP. Synapse has been tested with [Google][google-idp].
+ - set up a SaaS OP, such as [Auth0][auth0] or [Okta][okta]. Auth0 has a free tier which has been tested with Synapse.
+
+[google-idp]: https://developers.google.com/identity/protocols/OpenIDConnect#authenticatingtheuser
+[auth0]: https://auth0.com/
+[okta]: https://www.okta.com/
+[dex-idp]: https://github.com/dexidp/dex
+[hydra]: https://www.ory.sh/docs/hydra/
+
+
+## Sample configs
+
+Here are a few configs for providers that should work with Synapse.
+
+### [Dex][dex-idp]
+
+[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
+Although it is designed to help build a full-blown provider backed by an external database, it can be configured with static passwords in a config file.
+
+Follow the [Getting Started guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md) to install Dex.
+
+Edit the `examples/config-dev.yaml` config file from the Dex repo to add a client:
+
+```yaml
+staticClients:
+- id: synapse
+ secret: secret
+ redirectURIs:
+ - '[synapse base url]/_synapse/oidc/callback'
+ name: 'Synapse'
+```
+
+Run with `dex serve examples/config-dev.yaml`.
+
+Synapse config:
+
+```yaml
+oidc_config:
+ enabled: true
+ skip_verification: true # This is needed as Dex is served on an insecure endpoint
+ issuer: "http://127.0.0.1:5556/dex"
+ discover: true
+ client_id: "synapse"
+ client_secret: "secret"
+ scopes:
+ - openid
+ - profile
+ user_mapping_provider:
+ config:
+ localpart_template: '{{ user.name }}'
+ display_name_template: '{{ user.name|capitalize }}'
+```
+
+### [Auth0][auth0]
+
+1. Create a regular web application for Synapse
+2. Set the Allowed Callback URLs to `[synapse base url]/_synapse/oidc/callback`
+3. Add a rule to add the `preferred_username` claim.
+ <details>
+ <summary>Code sample</summary>
+
+ ```js
+ function addPersistenceAttribute(user, context, callback) {
+ user.user_metadata = user.user_metadata || {};
+ user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
+ context.idToken.preferred_username = user.user_metadata.preferred_username;
+
+ auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
+ .then(function(){
+ callback(null, user, context);
+ })
+ .catch(function(err){
+ callback(err);
+ });
+ }
+ ```
+
+ </details>
+
+
+```yaml
+oidc_config:
+ enabled: true
+ issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
+ discover: true
+ client_id: "your-client-id" # TO BE FILLED
+ client_secret: "your-client-secret" # TO BE FILLED
+ scopes:
+ - openid
+ - profile
+ user_mapping_provider:
+ config:
+ localpart_template: '{{ user.preferred_username }}'
+ display_name_template: '{{ user.name }}'
+```
+
+### GitHub
+
+GitHub is a bit special as it is not an OpenID Connect compliant provider, but just a regular OAuth2 provider.
+The `/user` API endpoint can be used to retrieve information about the user.
+Since the OIDC login mechanism needs an attribute to uniquely identify users, and that endpoint does not return a `sub` property, an alternative `subject_claim` has to be set.
+
+1. Create a new OAuth application: https://github.com/settings/applications/new
+2. Set the callback URL to `[synapse base url]/_synapse/oidc/callback`
+
+```yaml
+oidc_config:
+ enabled: true
+ issuer: "https://github.com/"
+ discover: false
+ client_id: "your-client-id" # TO BE FILLED
+ client_secret: "your-client-secret" # TO BE FILLED
+ authorization_endpoint: "https://github.com/login/oauth/authorize"
+ token_endpoint: "https://github.com/login/oauth/access_token"
+ userinfo_endpoint: "https://api.github.com/user"
+ scopes:
+ - read:user
+ user_mapping_provider:
+ config:
+ subject_claim: 'id'
+ localpart_template: '{{ user.login }}'
+ display_name_template: '{{ user.name }}'
+```
+
+### Google
+
+1. Set up a project in the Google API Console
+2. Obtain the OAuth 2.0 credentials (see <https://developers.google.com/identity/protocols/oauth2/openid-connect>)
+3. Add this Authorized redirect URI: `[synapse base url]/_synapse/oidc/callback`
+
+```yaml
+oidc_config:
+ enabled: true
+ issuer: "https://accounts.google.com/"
+ discover: true
+ client_id: "your-client-id" # TO BE FILLED
+ client_secret: "your-client-secret" # TO BE FILLED
+ scopes:
+ - openid
+ - profile
+ user_mapping_provider:
+ config:
+ localpart_template: '{{ user.given_name|lower }}'
+ display_name_template: '{{ user.name }}'
+```
+
+### Twitch
+
+1. Set up a developer account on [Twitch](https://dev.twitch.tv/)
+2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
+3. Add this OAuth Redirect URL: `[synapse base url]/_synapse/oidc/callback`
+
+```yaml
+oidc_config:
+ enabled: true
+ issuer: "https://id.twitch.tv/oauth2/"
+ discover: true
+ client_id: "your-client-id" # TO BE FILLED
+ client_secret: "your-client-secret" # TO BE FILLED
+ client_auth_method: "client_secret_post"
+ scopes:
+ - openid
+ user_mapping_provider:
+ config:
+ localpart_template: '{{ user.preferred_username }}'
+ display_name_template: '{{ user.name }}'
+```
diff --git a/docs/reverse_proxy.md b/docs/reverse_proxy.md
index c7222f73b9..cbb8269568 100644
--- a/docs/reverse_proxy.md
+++ b/docs/reverse_proxy.md
@@ -9,7 +9,7 @@ of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.
-> **NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
+**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
the requested URI in any way (for example, by decoding `%xx` escapes).
Beware that Apache *will* canonicalise URIs unless you specify
`nocanon`.
@@ -18,7 +18,7 @@ When setting up a reverse proxy, remember that Matrix clients and other
Matrix servers do not necessarily need to connect to your server via the
same server name or port. Indeed, clients will use port 443 by default,
whereas servers default to port 8448. Where these are different, we
-refer to the 'client port' and the \'federation port\'. See [the Matrix
+refer to the 'client port' and the 'federation port'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
@@ -28,93 +28,113 @@ Let's assume that we expect clients to connect to our server at
`https://example.com:8448`. The following sections detail the configuration of
the reverse proxy and the homeserver.
-## Webserver configuration examples
+## Reverse-proxy configuration examples
-> **NOTE**: You only need one of these.
+**NOTE**: You only need one of these.
### nginx
- server {
- listen 443 ssl;
- listen [::]:443 ssl;
- server_name matrix.example.com;
-
- location /_matrix {
- proxy_pass http://localhost:8008;
- proxy_set_header X-Forwarded-For $remote_addr;
- # Nginx by default only allows file uploads up to 1M in size
- # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
- client_max_body_size 10M;
- }
- }
-
- server {
- listen 8448 ssl default_server;
- listen [::]:8448 ssl default_server;
- server_name example.com;
-
- location / {
- proxy_pass http://localhost:8008;
- proxy_set_header X-Forwarded-For $remote_addr;
- }
- }
-
-> **NOTE**: Do not add a `/` after the port in `proxy_pass`, otherwise nginx will
+```
+server {
+ listen 443 ssl;
+ listen [::]:443 ssl;
+ server_name matrix.example.com;
+
+ location /_matrix {
+ proxy_pass http://localhost:8008;
+ proxy_set_header X-Forwarded-For $remote_addr;
+ # Nginx by default only allows file uploads up to 1M in size
+ # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
+ client_max_body_size 10M;
+ }
+}
+
+server {
+ listen 8448 ssl default_server;
+ listen [::]:8448 ssl default_server;
+ server_name example.com;
+
+ location / {
+ proxy_pass http://localhost:8008;
+ proxy_set_header X-Forwarded-For $remote_addr;
+ }
+}
+```
+
+**NOTE**: Do not add a path after the port in `proxy_pass`, otherwise nginx will
canonicalise/normalise the URI.
-### Caddy
+### Caddy 1
- matrix.example.com {
- proxy /_matrix http://localhost:8008 {
- transparent
- }
- }
+```
+matrix.example.com {
+ proxy /_matrix http://localhost:8008 {
+ transparent
+ }
+}
- example.com:8448 {
- proxy / http://localhost:8008 {
- transparent
- }
- }
+example.com:8448 {
+ proxy / http://localhost:8008 {
+ transparent
+ }
+}
+```
+
+### Caddy 2
+
+```
+matrix.example.com {
+ reverse_proxy /_matrix/* http://localhost:8008
+}
+
+example.com:8448 {
+ reverse_proxy http://localhost:8008
+}
+```
### Apache
- <VirtualHost *:443>
- SSLEngine on
- ServerName matrix.example.com;
+```
+<VirtualHost *:443>
+ SSLEngine on
+ ServerName matrix.example.com;
- AllowEncodedSlashes NoDecode
- ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
- ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
- </VirtualHost>
+ AllowEncodedSlashes NoDecode
+ ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+ ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+</VirtualHost>
- <VirtualHost *:8448>
- SSLEngine on
- ServerName example.com;
+<VirtualHost *:8448>
+ SSLEngine on
+ ServerName example.com;
- AllowEncodedSlashes NoDecode
- ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
- ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
- </VirtualHost>
+ AllowEncodedSlashes NoDecode
+ ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+ ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+</VirtualHost>
+```
-> **NOTE**: ensure the `nocanon` options are included.
+**NOTE**: Ensure the `nocanon` options are included.
### HAProxy
- frontend https
- bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
+```
+frontend https
+ bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
- # Matrix client traffic
- acl matrix-host hdr(host) -i matrix.example.com
- acl matrix-path path_beg /_matrix
+ # Matrix client traffic
+ acl matrix-host hdr(host) -i matrix.example.com
+ acl matrix-path path_beg /_matrix
- use_backend matrix if matrix-host matrix-path
+ use_backend matrix if matrix-host matrix-path
- frontend matrix-federation
- bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
- default_backend matrix
+frontend matrix-federation
+ bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
+ default_backend matrix
- backend matrix
- server matrix 127.0.0.1:8008
+backend matrix
+ server matrix 127.0.0.1:8008
+```
## Homeserver Configuration
diff --git a/docs/saml_mapping_providers.md b/docs/saml_mapping_providers.md
deleted file mode 100644
index 92f2380488..0000000000
--- a/docs/saml_mapping_providers.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# SAML Mapping Providers
-
-A SAML mapping provider is a Python class (loaded via a Python module) that
-works out how to map attributes of a SAML response object to Matrix-specific
-user attributes. Details such as user ID localpart, displayname, and even avatar
-URLs are all things that can be mapped from talking to a SSO service.
-
-As an example, a SSO service may return the email address
-"john.smith@example.com" for a user, whereas Synapse will need to figure out how
-to turn that into a displayname when creating a Matrix user for this individual.
-It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
-variations. As each Synapse configuration may want something different, this is
-where SAML mapping providers come into play.
-
-## Enabling Providers
-
-External mapping providers are provided to Synapse in the form of an external
-Python module. Retrieve this module from [PyPi](https://pypi.org) or elsewhere,
-then tell Synapse where to look for the handler class by editing the
-`saml2_config.user_mapping_provider.module` config option.
-
-`saml2_config.user_mapping_provider.config` allows you to provide custom
-configuration options to the module. Check with the module's documentation for
-what options it provides (if any). The options listed by default are for the
-user mapping provider built in to Synapse. If using a custom module, you should
-comment these options out and use those specified by the module instead.
-
-## Building a Custom Mapping Provider
-
-A custom mapping provider must specify the following methods:
-
-* `__init__(self, parsed_config)`
- - Arguments:
- - `parsed_config` - A configuration object that is the return value of the
- `parse_config` method. You should set any configuration options needed by
- the module here.
-* `saml_response_to_user_attributes(self, saml_response, failures)`
- - Arguments:
- - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
- information from.
- - `failures` - An `int` that represents the amount of times the returned
- mxid localpart mapping has failed. This should be used
- to create a deduplicated mxid localpart which should be
- returned instead. For example, if this method returns
- `john.doe` as the value of `mxid_localpart` in the returned
- dict, and that is already taken on the homeserver, this
- method will be called again with the same parameters but
- with failures=1. The method should then return a different
- `mxid_localpart` value, such as `john.doe1`.
- - This method must return a dictionary, which will then be used by Synapse
- to build a new user. The following keys are allowed:
- * `mxid_localpart` - Required. The mxid localpart of the new user.
- * `displayname` - The displayname of the new user. If not provided, will default to
- the value of `mxid_localpart`.
-* `parse_config(config)`
- - This method should have the `@staticmethod` decoration.
- - Arguments:
- - `config` - A `dict` representing the parsed content of the
- `saml2_config.user_mapping_provider.config` homeserver config option.
- Runs on homeserver startup. Providers should extract any option values
- they need here.
- - Whatever is returned will be passed back to the user mapping provider module's
- `__init__` method during construction.
-* `get_saml_attributes(config)`
- - This method should have the `@staticmethod` decoration.
- - Arguments:
- - `config` - A object resulting from a call to `parse_config`.
- - Returns a tuple of two sets. The first set equates to the saml auth
- response attributes that are required for the module to function, whereas
- the second set consists of those attributes which can be used if available,
- but are not necessary.
-
-## Synapse's Default Provider
-
-Synapse has a built-in SAML mapping provider if a custom provider isn't
-specified in the config. It is located at
-[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml
index 98ead7dc0e..8a8415b9a2 100644
--- a/docs/sample_config.yaml
+++ b/docs/sample_config.yaml
@@ -603,6 +603,45 @@ acme:
+## Caching ##
+
+# Caching can be configured through the following options.
+#
+# A cache 'factor' is a multiplier that can be applied to each of
+# Synapse's caches in order to increase or decrease the maximum
+# number of entries that can be stored.
+
+# The number of events to cache in memory. Not affected by
+# caches.global_factor.
+#
+#event_cache_size: 10K
+
+caches:
+ # Controls the global cache factor, which is the default cache factor
+ # for all caches if a specific factor for that cache is not otherwise
+ # set.
+ #
+ # This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
+ # variable. Setting by environment variable takes priority over
+ # setting through the config file.
+ #
+  # Defaults to 0.5, which will halve the size of all caches.
+ #
+ #global_factor: 1.0
+
+ # A dictionary of cache name to cache factor for that individual
+ # cache. Overrides the global cache factor for a given cache.
+ #
+ # These can also be set through environment variables comprised
+ # of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
+ # letters and underscores. Setting by environment variable
+ # takes priority over setting through the config file.
+ # Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
+ #
+ per_cache_factors:
+ #get_users_who_share_room_with_user: 2.0
+
+
## Database ##
# The 'database' setting defines the database that synapse uses to store all of
@@ -646,10 +685,6 @@ database:
args:
database: DATADIR/homeserver.db
-# Number of events to cache in memory.
-#
-#event_cache_size: 10K
-
## Logging ##
@@ -1470,6 +1505,94 @@ saml2_config:
#template_dir: "res/templates"
+# Enable OpenID Connect for registration and login. Uses authlib.
+#
+oidc_config:
+ # enable OpenID Connect. Defaults to false.
+ #
+ #enabled: true
+
+ # use the OIDC discovery mechanism to discover endpoints. Defaults to true.
+ #
+ #discover: true
+
+  # the OIDC issuer. Used to validate tokens and discover the provider's endpoints. Required.
+ #
+ #issuer: "https://accounts.example.com/"
+
+ # oauth2 client id to use. Required.
+ #
+ #client_id: "provided-by-your-issuer"
+
+ # oauth2 client secret to use. Required.
+ #
+ #client_secret: "provided-by-your-issuer"
+
+ # auth method to use when exchanging the token.
+ # Valid values are "client_secret_basic" (default), "client_secret_post" and "none".
+ #
+  #client_auth_method: "client_secret_basic"
+
+  # list of scopes to request. This should include the "openid" scope. Defaults to ["openid"].
+ #
+ #scopes: ["openid"]
+
+ # the oauth2 authorization endpoint. Required if provider discovery is disabled.
+ #
+ #authorization_endpoint: "https://accounts.example.com/oauth2/auth"
+
+ # the oauth2 token endpoint. Required if provider discovery is disabled.
+ #
+ #token_endpoint: "https://accounts.example.com/oauth2/token"
+
+  # the OIDC userinfo endpoint. Required if discovery is disabled and the "openid" scope is not requested.
+ #
+ #userinfo_endpoint: "https://accounts.example.com/userinfo"
+
+  # URI from which to fetch the JWKS. Required if discovery is disabled and the "openid" scope is used.
+ #
+ #jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
+
+ # skip metadata verification. Defaults to false.
+ # Use this if you are connecting to a provider that is not OpenID Connect compliant.
+ # Avoid this in production.
+ #
+ #skip_verification: false
+
+
+ # An external module can be provided here as a custom solution to mapping
+  # attributes returned from an OIDC provider onto a Matrix user.
+ #
+ user_mapping_provider:
+ # The custom module's class. Uncomment to use a custom module.
+ # Default is 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'.
+ #
+ #module: mapping_provider.OidcMappingProvider
+
+    # Custom configuration values for the module. The options below are intended
+    # for the built-in provider; they should be changed if using a custom
+ # module. This section will be passed as a Python dictionary to the
+ # module's `parse_config` method.
+ #
+ # Below is the config of the default mapping provider, based on Jinja2
+ # templates. Those templates are used to render user attributes, where the
+ # userinfo object is available through the `user` variable.
+ #
+ config:
+ # name of the claim containing a unique identifier for the user.
+ # Defaults to `sub`, which OpenID Connect compliant providers should provide.
+ #
+ #subject_claim: "sub"
+
+ # Jinja2 template for the localpart of the MXID
+ #
+ localpart_template: "{{ user.preferred_username }}"
+
+ # Jinja2 template for the display name to set on first login. Optional.
+ #
+ #display_name_template: "{{ user.given_name }} {{ user.last_name }}"
+
+
# Enable CAS for registration and login.
#
@@ -1554,6 +1677,13 @@ sso:
#
# This template has no additional variables.
#
+ # * HTML page to display to users if something goes wrong during the
+ # OpenID Connect authentication process: 'sso_error.html'.
+ #
+ # When rendering, this template is given two variables:
+ # * error: the technical name of the error
+ # * error_description: a human-readable message for the error
+ #
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
@@ -1772,10 +1902,17 @@ password_providers:
# include_content: true
-#spam_checker:
-# module: "my_custom_project.SuperSpamChecker"
-# config:
-# example_option: 'things'
+# Spam checkers are third-party modules that can block specific actions
+# of local users, such as creating rooms and registering undesirable
+# usernames. They can also block actions of remote users by redacting
+# incoming events.
+#
+spam_checker:
+ #- module: "my_custom_project.SuperSpamChecker"
+ # config:
+ # example_option: 'things'
+ #- module: "some_other_project.BadEventStopper"
+ # config:
+ # example_stop_events_from: ['@bad:example.com']
# Uncomment to allow non-server-admin users to create groups on this server
diff --git a/docs/spam_checker.md b/docs/spam_checker.md
index 5b5f5000b7..eb10e115f9 100644
--- a/docs/spam_checker.md
+++ b/docs/spam_checker.md
@@ -64,10 +64,12 @@ class ExampleSpamChecker:
Modify the `spam_checker` section of your `homeserver.yaml` in the following
manner:
-`module` should point to the fully qualified Python class that implements your
-custom logic, e.g. `my_module.ExampleSpamChecker`.
+Create a list entry with the keys `module` and `config`.
-`config` is a dictionary that gets passed to the spam checker class.
+* `module` should point to the fully qualified Python class that implements your
+ custom logic, e.g. `my_module.ExampleSpamChecker`.
+
+* `config` is a dictionary that gets passed to the spam checker class.
### Example
@@ -75,12 +77,15 @@ This section might look like:
```yaml
spam_checker:
- module: my_module.ExampleSpamChecker
- config:
- # Enable or disable a specific option in ExampleSpamChecker.
- my_custom_option: true
+ - module: my_module.ExampleSpamChecker
+ config:
+ # Enable or disable a specific option in ExampleSpamChecker.
+ my_custom_option: true
```
+Additional spam checkers can be used by appending further entries to the list,
+as in the example below. An action is blocked when at least one of the
+configured spam checkers flags it.
+
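+For example, a configuration with two spam checkers might look like this (the
+module names and options are illustrative placeholders):
+
+```yaml
+spam_checker:
+  - module: my_module.ExampleSpamChecker
+    config:
+      my_custom_option: true
+  - module: some_other_project.BadEventStopper
+    config:
+      example_stop_events_from: ['@bad:example.com']
+```
+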
## Examples
The [Mjolnir](https://github.com/matrix-org/mjolnir) project is a full fledged
diff --git a/docs/sso_mapping_providers.md b/docs/sso_mapping_providers.md
new file mode 100644
index 0000000000..4cd3a568f2
--- /dev/null
+++ b/docs/sso_mapping_providers.md
@@ -0,0 +1,146 @@
+# SSO Mapping Providers
+
+A mapping provider is a Python class (loaded via a Python module) that
+works out how to map attributes of a SSO response to Matrix-specific
+user attributes. Details such as user ID localpart, displayname, and even avatar
+URLs are all things that can be mapped from talking to a SSO service.
+
+As an example, a SSO service may return the email address
+"john.smith@example.com" for a user, whereas Synapse will need to figure out how
+to turn that into a displayname when creating a Matrix user for this individual.
+It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
+variations. As each Synapse configuration may want something different, this is
+where SSO mapping providers come into play.
+
+SSO mapping providers are currently supported for OpenID and SAML SSO
+configurations. Please see the details below for how to implement your own.
+
+External mapping providers are provided to Synapse in the form of an external
+Python module. You can retrieve this module from [PyPi](https://pypi.org) or elsewhere,
+but it must be importable via Synapse (e.g. it must be in the same virtualenv
+as Synapse). The Synapse config is then modified to point to the mapping provider
+(and optionally provide additional configuration for it).
+
+## OpenID Mapping Providers
+
+The OpenID mapping provider can be customized by editing the
+`oidc_config.user_mapping_provider.module` config option.
+
+`oidc_config.user_mapping_provider.config` allows you to provide custom
+configuration options to the module. Check with the module's documentation for
+what options it provides (if any). The options listed by default are for the
+user mapping provider built in to Synapse. If using a custom module, you should
+comment these options out and use those specified by the module instead.
+
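+For example, a hypothetical custom module could be enabled with configuration
+along these lines (a sketch; the module path and option names are illustrative):
+
+```yaml
+oidc_config:
+  # ... other OIDC options ...
+  user_mapping_provider:
+    module: my_package.MyOidcMappingProvider
+    config:
+      # Passed to the module's parse_config method.
+      my_option: "some value"
+```
+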
+### Building a Custom OpenID Mapping Provider
+
+A custom mapping provider must specify the following methods:
+
+* `__init__(self, parsed_config)`
+ - Arguments:
+ - `parsed_config` - A configuration object that is the return value of the
+ `parse_config` method. You should set any configuration options needed by
+ the module here.
+* `parse_config(config)`
+ - This method should have the `@staticmethod` decoration.
+ - Arguments:
+ - `config` - A `dict` representing the parsed content of the
+ `oidc_config.user_mapping_provider.config` homeserver config option.
+ Runs on homeserver startup. Providers should extract and validate
+ any option values they need here.
+ - Whatever is returned will be passed back to the user mapping provider module's
+ `__init__` method during construction.
+* `get_remote_user_id(self, userinfo)`
+ - Arguments:
+    - `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
+ information from.
+ - This method must return a string, which is the unique identifier for the
+    user. Commonly the `sub` claim of the response.
+* `map_user_attributes(self, userinfo, token)`
+ - This method should be async.
+ - Arguments:
+    - `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
+ information from.
+ - `token` - A dictionary which includes information necessary to make
+ further requests to the OpenID provider.
+ - Returns a dictionary with two keys:
+    - `localpart`: A required string, used to generate the Matrix ID.
+    - `displayname`: An optional string, the display name for the user.
+
+### Default OpenID Mapping Provider
+
+Synapse has a built-in OpenID mapping provider if a custom provider isn't
+specified in the config. It is located at
+[`synapse.handlers.oidc_handler.JinjaOidcMappingProvider`](../synapse/handlers/oidc_handler.py).
+
+## SAML Mapping Providers
+
+The SAML mapping provider can be customized by editing the
+`saml2_config.user_mapping_provider.module` config option.
+
+`saml2_config.user_mapping_provider.config` allows you to provide custom
+configuration options to the module. Check with the module's documentation for
+what options it provides (if any). The options listed by default are for the
+user mapping provider built in to Synapse. If using a custom module, you should
+comment these options out and use those specified by the module instead.
+
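+The equivalent for a hypothetical custom SAML module (again a sketch, with
+illustrative names):
+
+```yaml
+saml2_config:
+  # ... other SAML options ...
+  user_mapping_provider:
+    module: my_package.MySamlMappingProvider
+    config:
+      # Passed to the module's parse_config method.
+      my_option: "some value"
+```
+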
+### Building a Custom SAML Mapping Provider
+
+A custom mapping provider must specify the following methods:
+
+* `__init__(self, parsed_config)`
+ - Arguments:
+ - `parsed_config` - A configuration object that is the return value of the
+ `parse_config` method. You should set any configuration options needed by
+ the module here.
+* `parse_config(config)`
+ - This method should have the `@staticmethod` decoration.
+ - Arguments:
+ - `config` - A `dict` representing the parsed content of the
+      `saml2_config.user_mapping_provider.config` homeserver config option.
+ Runs on homeserver startup. Providers should extract and validate
+ any option values they need here.
+ - Whatever is returned will be passed back to the user mapping provider module's
+ `__init__` method during construction.
+* `get_saml_attributes(config)`
+ - This method should have the `@staticmethod` decoration.
+ - Arguments:
+    - `config` - An object resulting from a call to `parse_config`.
+ - Returns a tuple of two sets. The first set equates to the SAML auth
+ response attributes that are required for the module to function, whereas
+ the second set consists of those attributes which can be used if available,
+ but are not necessary.
+* `get_remote_user_id(self, saml_response, client_redirect_url)`
+ - Arguments:
+ - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
+ information from.
+ - `client_redirect_url` - A string, the URL that the client will be
+ redirected to.
+ - This method must return a string, which is the unique identifier for the
+    user. Commonly the `uid` claim of the response.
+* `saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
+ - Arguments:
+ - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
+ information from.
+    - `failures` - An `int` that represents the number of times the returned
+ mxid localpart mapping has failed. This should be used
+ to create a deduplicated mxid localpart which should be
+ returned instead. For example, if this method returns
+ `john.doe` as the value of `mxid_localpart` in the returned
+ dict, and that is already taken on the homeserver, this
+ method will be called again with the same parameters but
+ with failures=1. The method should then return a different
+ `mxid_localpart` value, such as `john.doe1`.
+ - `client_redirect_url` - A string, the URL that the client will be
+ redirected to.
+ - This method must return a dictionary, which will then be used by Synapse
+ to build a new user. The following keys are allowed:
+ * `mxid_localpart` - Required. The mxid localpart of the new user.
+ * `displayname` - The displayname of the new user. If not provided, will default to
+ the value of `mxid_localpart`.
+
+### Default SAML Mapping Provider
+
+Synapse has a built-in SAML mapping provider if a custom provider isn't
+specified in the config. It is located at
+[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).
diff --git a/docs/tcp_replication.md b/docs/tcp_replication.md
index ab2fffbfe4..db318baa9d 100644
--- a/docs/tcp_replication.md
+++ b/docs/tcp_replication.md
@@ -219,10 +219,6 @@ Asks the server for the current position of all streams.
Inform the server a pusher should be removed
-#### INVALIDATE_CACHE (C)
-
- Inform the server a cache should be invalidated
-
### REMOTE_SERVER_UP (S, C)
Inform other processes that a remote server may have come back online.
|