From 5b98526363a34c5c94d4ede0e8c44d262c085b78 Mon Sep 17 00:00:00 2001 From: anoadragon453 Date: Thu, 3 Jun 2021 16:21:02 +0000 Subject: deploy: fd9856e4a98fb3fa9c139317b0a3b79f22aff1c7 --- develop/print.html | 12357 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 12357 insertions(+) create mode 100644 develop/print.html (limited to 'develop/print.html') diff --git a/develop/print.html b/develop/print.html new file mode 100644 index 0000000000..c6f2515634 --- /dev/null +++ b/develop/print.html @@ -0,0 +1,12357 @@ Synapse
+
+ +
+ +
+ +

Introduction

+

Welcome to the documentation repository for Synapse, the reference +Matrix homeserver implementation.

+
+

Installation Instructions

+

There are 3 steps to follow under Installation Instructions.

+ +

Choosing your server name

+

It is important to choose the name for your server before you install Synapse, +because it cannot be changed later.

+

The server name determines the "domain" part of user-ids for users on your +server: these will all be of the format @user:my.domain.name. It also +determines how other Matrix servers will reach yours for federation.

+

For a test configuration, set this to the hostname of your server. For a more +production-ready setup, you will probably want to specify your domain +(example.com) rather than a matrix-specific hostname here (in the same way +that your email address is probably user@example.com rather than +user@email.example.com) - but doing so may require more advanced setup: see +Setting up Federation.
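If you take the more production-ready route and use example.com as your server name while running Synapse on a matrix-specific host, federation can be delegated to that host with a well-known file. A minimal sketch (both hostnames are placeholders) of the JSON that https://example.com/.well-known/matrix/server would return:

```
{
  "m.server": "matrix.example.com:443"
}
```

Other homeservers look this file up when deciding where to connect for federation; see Setting up Federation for the full rules.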

+

Installing Synapse

+

Installing from source

+

(Prebuilt packages are available for some platforms - see Prebuilt packages.)

+

When installing from source please make sure that the Platform-specific prerequisites are already installed.

+

System requirements:

+
    +
  • POSIX-compliant system (tested on Linux & OS X)
  • Python 3.5.2 or later, up to Python 3.9.
  • At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
+

To install the Synapse homeserver run:

+
mkdir -p ~/synapse
+virtualenv -p python3 ~/synapse/env
+source ~/synapse/env/bin/activate
+pip install --upgrade pip
+pip install --upgrade setuptools
+pip install matrix-synapse
+
+

This will download Synapse from PyPI +and install it, along with the python libraries it uses, into a virtual environment +under ~/synapse/env. Feel free to pick a different directory if you +prefer.

+

This Synapse installation can later be upgraded by using pip again with the +--upgrade flag:

+
source ~/synapse/env/bin/activate
+pip install -U matrix-synapse
+
+

Before you can start Synapse, you will need to generate a configuration +file. To do this, run (in your virtualenv, as before):

+
cd ~/synapse
+python -m synapse.app.homeserver \
+    --server-name my.domain.name \
+    --config-path homeserver.yaml \
+    --generate-config \
+    --report-stats=[yes|no]
+
+

... substituting an appropriate value for --server-name.

+

This command will generate a config file for you that you can then customise, but it will +also generate a set of keys. These keys will allow your homeserver to +identify itself to other homeservers, so don't lose or delete them. It would be +wise to back them up somewhere safe. (If, for whatever reason, you do need to +change your homeserver's keys, you may find that other homeservers have the +old key cached. If you update the signing key, you should change the name of the +key in the <server name>.signing.key file (the second word) to something +different. See the spec for more information on key management).
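As a concrete illustration of renaming the key id, a sketch using made-up file contents (a real signing key file holds three words: an algorithm, a key id, and the base64-encoded key; the key material below is fake, and you should never publish a real signing key):

```shell
# Hypothetical signing key file: "<algorithm> <key id> <base64 key>".
mkdir -p /tmp/synapse-demo
printf 'ed25519 a_old_id bm90LWEtcmVhbC1rZXk\n' > /tmp/synapse-demo/my.domain.name.signing.key

# Change the second word (the key id) so other homeservers do not
# confuse the new key with a cached copy of the old one.
awk '{ $2 = "a_new_id"; print }' /tmp/synapse-demo/my.domain.name.signing.key \
    > /tmp/synapse-demo/key.tmp \
  && mv /tmp/synapse-demo/key.tmp /tmp/synapse-demo/my.domain.name.signing.key

cat /tmp/synapse-demo/my.domain.name.signing.key
```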

+

To actually run your new homeserver, pick a working directory for Synapse to +run (e.g. ~/synapse), and:

+
cd ~/synapse
+source env/bin/activate
+synctl start
+
+

Platform-specific prerequisites

+

Synapse is written in Python but some of the libraries it uses are written in +C. So before we can install Synapse itself we need a working C compiler and the +header files for Python C extensions.

+
Debian/Ubuntu/Raspbian
+

Installing prerequisites on Ubuntu or Debian:

+
sudo apt install build-essential python3-dev libffi-dev \
+                     python3-pip python3-setuptools sqlite3 \
+                     libssl-dev virtualenv libjpeg-dev libxslt1-dev
+
+
ArchLinux
+

Installing prerequisites on ArchLinux:

+
sudo pacman -S base-devel python python-pip \
+               python-setuptools python-virtualenv sqlite3
+
+
CentOS/Fedora
+

Installing prerequisites on CentOS or Fedora Linux:

+
sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
+                 libwebp-devel libxml2-devel libxslt-devel libpq-devel \
+                 python3-virtualenv libffi-devel openssl-devel python3-devel
+sudo dnf groupinstall "Development Tools"
+
+
macOS
+

Installing prerequisites on macOS:

+
xcode-select --install
+sudo easy_install pip
+sudo pip install virtualenv
+brew install pkg-config libffi
+
+

On macOS Catalina (10.15) you may need to explicitly install OpenSSL +via brew and inform pip about it so that psycopg2 builds:

+
brew install openssl@1.1
+export LDFLAGS="-L/usr/local/opt/openssl/lib"
+export CPPFLAGS="-I/usr/local/opt/openssl/include"
+
+
OpenSUSE
+

Installing prerequisites on openSUSE:

+
sudo zypper in -t pattern devel_basis
+sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
+               python-devel libffi-devel libopenssl-devel libjpeg62-devel
+
+
OpenBSD
+

A port of Synapse is available under net/synapse. The filesystem +underlying the homeserver directory (defaults to /var/synapse) has to be +mounted with wxallowed (cf. mount(8)), so creating a separate filesystem +and mounting it to /var/synapse should be taken into consideration.

+

To build Synapse's Python dependency, the WRKOBJDIR +(cf. bsd.port.mk(5)) used for building Python also needs to be on a filesystem +mounted with wxallowed (cf. mount(8)).

+

Creating a WRKOBJDIR for building python under /usr/local (which on a +default OpenBSD installation is mounted with wxallowed):

+
doas mkdir /usr/local/pobj_wxallowed
+
+

Assuming PORTS_PRIVSEP=Yes (cf. bsd.port.mk(5)) and SUDO=doas are +configured in /etc/mk.conf:

+
doas chown _pbuild:_pbuild /usr/local/pobj_wxallowed
+
+

Setting the WRKOBJDIR for building python:

+
echo WRKOBJDIR_lang/python/3.7=/usr/local/pobj_wxallowed  \\nWRKOBJDIR_lang/python/2.7=/usr/local/pobj_wxallowed >> /etc/mk.conf
+
+

Building Synapse:

+
cd /usr/ports/net/synapse
+make install
+
+
Windows
+

If you wish to run or develop Synapse on Windows, the Windows Subsystem For +Linux provides a Linux environment on Windows 10 which is capable of using the +Debian, Fedora, or source installation methods. More information about WSL can +be found at https://docs.microsoft.com/en-us/windows/wsl/install-win10 for +Windows 10 and https://docs.microsoft.com/en-us/windows/wsl/install-on-server +for Windows Server.

+

Prebuilt packages

+

As an alternative to installing from source, prebuilt packages are available +for a number of platforms.

+

Docker images and Ansible playbooks

+

There is an official synapse image available at +https://hub.docker.com/r/matrixdotorg/synapse which can be used with +the docker-compose file available at contrib/docker. Further +information on this including configuration options is available in the README +on hub.docker.com.

+

Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a +Dockerfile to automate a synapse server in a single Docker image, at +https://hub.docker.com/r/avhost/docker-matrix/tags/

+

Slavi Pantaleev has created an Ansible playbook, +which installs the official Docker image of Matrix Synapse +along with many other Matrix-related services (Postgres database, Element, coturn, +ma1sd, SSL support, etc.). +For more details, see +https://github.com/spantaleev/matrix-docker-ansible-deploy

+

Debian/Ubuntu

+
Matrix.org packages
+

Matrix.org provides Debian/Ubuntu packages of the latest stable version of +Synapse via https://packages.matrix.org/debian/. They are available for Debian +9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:

+
sudo apt install -y lsb-release wget apt-transport-https
+sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
+echo "deb [signed-by=/usr/share/keyrings/matrix-org-archive-keyring.gpg] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
+    sudo tee /etc/apt/sources.list.d/matrix-org.list
+sudo apt update
+sudo apt install matrix-synapse-py3
+
+

Note: if you followed a previous version of these instructions which +recommended using apt-key add to add an old key from +https://matrix.org/packages/debian/, you should note that this key has been +revoked. You should remove the old key with sudo apt-key remove C35EB17E1EAE708E6603A9B3AD0592FE47F0DF61, and follow the above instructions to +update your configuration.

+

The fingerprint of the repository signing key (as shown by gpg /usr/share/keyrings/matrix-org-archive-keyring.gpg) is +AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058.

+
Downstream Debian packages
+

We do not recommend using the packages from the default Debian buster +repository at this time, as they are old and suffer from known security +vulnerabilities. You can install the latest version of Synapse from +our repository or from buster-backports. Please +see the Debian documentation +for information on how to use backports.

+

If you are using Debian sid or testing, Synapse is available in the default +repositories and it should be possible to install it simply with:

+
sudo apt install matrix-synapse
+
+
Downstream Ubuntu packages
+

We do not recommend using the packages in the default Ubuntu repository +at this time, as they are old and suffer from known security vulnerabilities. +The latest version of Synapse can be installed from our repository.

+

Fedora

+

Synapse is in the Fedora repositories as matrix-synapse:

+
sudo dnf install matrix-synapse
+
+

Oleg Girko provides Fedora RPMs at +https://obs.infoserver.lv/project/monitor/matrix-synapse

+

OpenSUSE

+

Synapse is in the OpenSUSE repositories as matrix-synapse:

+
sudo zypper install matrix-synapse
+
+

SUSE Linux Enterprise Server

+

Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository at +https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/

+

ArchLinux

+

The quickest way to get up and running with ArchLinux is probably with the community package +https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in most of +the necessary dependencies.

+

pip may be outdated (6.0.7-1, and needs to be upgraded to 6.0.8-1):

+
sudo pip install --upgrade pip
+
+

If you encounter an error with the bcrypt library causing a Wrong ELF Class: +ELFCLASS32 error (on x64 systems), you may need to reinstall py-bcrypt to correctly +compile it under the right architecture. (This should not be needed if +installing under virtualenv):

+
sudo pip uninstall py-bcrypt
+sudo pip install py-bcrypt
+
+

Void Linux

+

Synapse can be found in the void repositories as 'synapse':

+
xbps-install -Su
+xbps-install -S synapse
+
+

FreeBSD

+

Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:

+
    +
  • Ports: cd /usr/ports/net-im/py-matrix-synapse && make install clean
  • Packages: pkg install py37-matrix-synapse
+

OpenBSD

+

As of OpenBSD 6.7 Synapse is available as a pre-compiled binary. The filesystem +underlying the homeserver directory (defaults to /var/synapse) has to be +mounted with wxallowed (cf. mount(8)), so creating a separate filesystem +and mounting it to /var/synapse should be taken into consideration.

+

Installing Synapse:

+
doas pkg_add synapse
+
+

NixOS

+

Robin Lambertz has packaged Synapse for NixOS at: +https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix

+

Setting up Synapse

+

Once you have installed synapse as above, you will need to configure it.

+

Using PostgreSQL

+

By default Synapse uses an SQLite database and in doing so trades +performance for convenience. Almost all installations should opt to use PostgreSQL +instead. Advantages include:

+
    +
  • significant performance improvements due to the superior threading and +caching model, smarter query optimiser
  • allowing the DB to be run on separate hardware
+

For information on how to install and use PostgreSQL in Synapse, please see +docs/postgres.md

+

SQLite is only acceptable for testing purposes. SQLite should not be used in +a production server. Synapse will perform poorly when using +SQLite, especially when participating in large rooms.

+

TLS certificates

+

The default configuration exposes a single HTTP port on the local +interface: http://localhost:8008. It is suitable for local testing, +but for any practical use, you will need Synapse's APIs to be served +over HTTPS.

+

The recommended way to do so is to set up a reverse proxy on port +8448. You can find documentation on doing so in +docs/reverse_proxy.md.

+

Alternatively, you can configure Synapse to expose an HTTPS port. To do +so, you will need to edit homeserver.yaml, as follows:

+
    +
  • First, under the listeners section, uncomment the configuration for the +TLS-enabled listener. (Remove the hash sign (#) at the start of +each line). The relevant lines are like this:
  • +
+
  - port: 8448
+    type: http
+    tls: true
+    resources:
+      - names: [client, federation]
+
+
    +
  • You will also need to uncomment the tls_certificate_path and +tls_private_key_path lines under the TLS section. You will need to manage +provisioning of these certificates yourself; Synapse had built-in ACME +support, but the ACMEv1 protocol Synapse implements is deprecated, not +allowed by LetsEncrypt for new sites, and will break for existing sites in +late 2020. See ACME.md.

    If you are using your own certificate, be sure to use a .pem file that +includes the full certificate chain including any intermediate certificates +(for instance, if using certbot, use fullchain.pem as your certificate, not +cert.pem).
+

For a more detailed guide to configuring your server for federation, see +federate.md.

+

Client Well-Known URI

+

Setting up the client Well-Known URI is optional but if you set it up, it will +allow users to enter their full username (e.g. @user:<server_name>) into clients +which support well-known lookup to automatically configure the homeserver and +identity server URLs. This is useful so that users don't have to memorize or think +about the actual homeserver URL you are using.

+

The URL https://<server_name>/.well-known/matrix/client should return JSON in +the following format.

+
{
+  "m.homeserver": {
+    "base_url": "https://<matrix.example.com>"
+  }
+}
+
+

It can optionally contain identity server information as well.

+
{
+  "m.homeserver": {
+    "base_url": "https://<matrix.example.com>"
+  },
+  "m.identity_server": {
+    "base_url": "https://<identity.example.com>"
+  }
+}
+
+

To work in browser-based clients, the file must be served with the appropriate +Cross-Origin Resource Sharing (CORS) headers. A recommended value would be +Access-Control-Allow-Origin: *, which would allow all browser-based clients to +view it.

+

In nginx this would be something like:

+
location /.well-known/matrix/client {
+    return 200 '{"m.homeserver": {"base_url": "https://<matrix.example.com>"}}';
+    default_type application/json;
+    add_header Access-Control-Allow-Origin *;
+}
+
+

You should also ensure the public_baseurl option in homeserver.yaml is set +correctly. public_baseurl should be set to the URL that clients will use to +connect to your server. This is the same URL you put for the m.homeserver +base_url above.

+
public_baseurl: "https://<matrix.example.com>"
+
+

Email

+

It is desirable for Synapse to have the capability to send email. This allows +Synapse to send password reset emails, send verifications when an email address +is added to a user's account, and send email notifications to users when they +receive new messages.

+

To configure an SMTP server for Synapse, modify the configuration section +headed email, and be sure to have at least the smtp_host, smtp_port +and notif_from fields filled out. You may also need to set smtp_user, +smtp_pass, and require_transport_security.

+

If email is not configured, password reset, registration and notifications via +email will be disabled.
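A minimal sketch of the email section in homeserver.yaml; every value below is a placeholder to adapt to your own SMTP provider:

```
email:
  smtp_host: mail.example.com
  smtp_port: 587
  smtp_user: "synapse"        # only needed if your SMTP server requires auth
  smtp_pass: "secretpassword"
  require_transport_security: true
  notif_from: "Your %(app)s homeserver <noreply@example.com>"
```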

+

Registering a user

+

The easiest way to create a new user is to do so from a client like Element.

+

Alternatively, you can do so from the command line. This can be done as follows:

+
    +
  1. If synapse was installed via pip, activate the virtualenv as follows (if Synapse was +installed via a prebuilt package, register_new_matrix_user should already be +on the search path):

     cd ~/synapse
     source env/bin/activate
     synctl start # if not already running

  2. Run the following command:

     register_new_matrix_user -c homeserver.yaml http://localhost:8008
+

This will prompt you to add details for the new user, and will then connect to +the running Synapse to create the new user. For example:

+
New user localpart: erikj
+Password:
+Confirm password:
+Make admin [no]:
+Success!
+
+

This process uses a setting registration_shared_secret in +homeserver.yaml, which is shared between Synapse itself and the +register_new_matrix_user script. It doesn't matter what it is (a random +value is generated by --generate-config), but it should be kept secret, as +anyone with knowledge of it can register users, including admin accounts, +on your server even if enable_registration is false.
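If you want to set the shared secret by hand rather than rely on --generate-config, any long random string will do. One common way to produce one (purely illustrative; the output file path is arbitrary):

```shell
# Generate a 48-character alphanumeric secret suitable for
# registration_shared_secret, and write it in homeserver.yaml syntax.
secret=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 48)
echo "registration_shared_secret: \"$secret\"" > /tmp/synapse-demo-secret.yaml
cat /tmp/synapse-demo-secret.yaml
```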

+

Setting up a TURN server

+

For reliable VoIP calls to be routed via this homeserver, you MUST configure +a TURN server. See docs/turn-howto.md for details.

+

URL previews

+

Synapse includes support for previewing URLs, which is disabled by default. To +turn it on you must enable the url_preview_enabled: True config parameter +and explicitly specify the IP ranges that Synapse is not allowed to spider for +previewing in the url_preview_ip_range_blacklist configuration parameter. +This is critical from a security perspective to stop arbitrary Matrix users +spidering 'internal' URLs on your network. At the very least we recommend that +your loopback and RFC1918 IP addresses are blacklisted.
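A minimal sketch of the corresponding homeserver.yaml settings, covering the loopback and RFC1918 ranges mentioned above (extend the list to suit your own network):

```
url_preview_enabled: true
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'
  - '::1/128'
```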

+

This also requires the optional lxml python dependency to be installed. This +in turn requires the libxml2 library to be available - on Debian/Ubuntu this +means apt-get install libxml2-dev, or equivalent for your OS.

+

Troubleshooting Installation

+

pip seems to leak lots of memory during installation. For instance, a Linux +host with 512MB of RAM may run out of memory whilst installing Twisted. If this +happens, you will have to individually install the dependencies which are +failing, e.g.:

+
pip install twisted
+
+

If you have any other problems, feel free to ask in +#synapse:matrix.org.

+

Using Postgres

+

Synapse supports PostgreSQL versions 9.6 or later.

+

Install postgres client libraries

+

Synapse will require the python postgres client library in order to +connect to a postgres database.

+
    +
  • If you are using the matrix.org debian/ubuntu +packages, the necessary python +library will already be installed, but you will need to ensure the +low-level postgres library is installed, which you can do with +apt install libpq5.

  • For other pre-built packages, please consult the documentation from +the relevant package.

  • If you installed synapse in a +virtualenv, you can install +the library with:

    ~/synapse/env/bin/pip install "matrix-synapse[postgres]"

    (substituting the path to your virtualenv for ~/synapse/env, if +you used a different path). You will require the postgres +development files. These are in the libpq-dev package on +Debian-derived distributions.
+

Set up database

+

Assuming your PostgreSQL database user is called postgres, first authenticate as the database user with:

+
su - postgres
+# Or, if your system uses sudo to get administrative rights
+sudo -u postgres bash
+
+

Then, create a postgres user and a database with:

+
# this will prompt for a password for the new user
+createuser --pwprompt synapse_user
+
+createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
+
+

The above will create a user called synapse_user, and a database called +synapse.

+

Note that the PostgreSQL database must have the correct encoding set +(as shown above), otherwise it will not be able to store UTF8 strings.

+

You may need to enable password authentication so synapse_user can +connect to the database. See +https://www.postgresql.org/docs/current/auth-pg-hba-conf.html.

+

Synapse config

+

When you are ready to start using PostgreSQL, edit the database +section in your config file to match the following lines:

+
database:
+  name: psycopg2
+  args:
+    user: <user>
+    password: <pass>
+    database: <db>
+    host: <host>
+    cp_min: 5
+    cp_max: 10
+
+

All keys and values in args are passed to the psycopg2.connect(..) +function, except keys beginning with cp_, which are consumed by the +twisted adbapi connection pool. See the libpq +documentation +for a list of options which can be passed.

+

You should consider tuning the args.keepalives_* options if there is any danger of +the connection between your homeserver and database dropping, otherwise Synapse +may block for an extended period while it waits for a response from the +database server. Example values might be:

+
database:
+  args:
+    # ... as above
+
+    # seconds of inactivity after which TCP should send a keepalive message to the server
+    keepalives_idle: 10
+
+    # the number of seconds after which a TCP keepalive message that is not
+    # acknowledged by the server should be retransmitted
+    keepalives_interval: 10
+
+    # the number of TCP keepalives that can be lost before the client's connection
+    # to the server is considered dead
+    keepalives_count: 3
+
+

Tuning Postgres

+

The default settings should be fine for most deployments. For larger +scale deployments tuning some of the settings is recommended, details of +which can be found at +https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server.

+

In particular, we've found tuning the following values helpful for +performance:

+
    +
  • shared_buffers
  • effective_cache_size
  • work_mem
  • maintenance_work_mem
  • autovacuum_work_mem
+

Note that the appropriate values for those fields depend on the amount +of free memory the database host has available.
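As a purely illustrative starting point for a postgresql.conf on a host with roughly 8GB of RAM available to the database (these numbers are assumptions to tune for your own hardware, not recommendations from the Synapse documentation):

```
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 32MB
maintenance_work_mem = 512MB
autovacuum_work_mem = 256MB
```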

+

Porting from SQLite

+

Overview

+

The script synapse_port_db allows porting an existing synapse server +backed by SQLite to using PostgreSQL. This is done as a two-phase +process:

+
    +
  1. Copy the existing SQLite database to a separate location and run +the port script against that offline database.
  2. Shut down the server. Rerun the port script to port any data that +has come in since taking the first snapshot. Restart the server against +the PostgreSQL database.
+

The port script is designed to be run repeatedly against newer snapshots +of the SQLite database file. This makes it safe to repeat step 1 if +there was a delay between taking the previous snapshot and being ready +to do step 2.

+

It is safe to kill the port script at any time and restart it.

+

Note that the database may take up significantly more (25% - 100% more) +space on disk after porting to Postgres.

+

Using the port script

+

Firstly, shut down the currently running synapse server and copy its +database file (typically homeserver.db) to another location. Once the +copy is complete, restart synapse. For instance:

+
./synctl stop
+cp homeserver.db homeserver.db.snapshot
+./synctl start
+
+

Copy the old config file into a new config file:

+
cp homeserver.yaml homeserver-postgres.yaml
+
+

Edit the database section as described in the section Synapse config +above. Then, with the SQLite snapshot located at homeserver.db.snapshot, +simply run:

+
synapse_port_db --sqlite-database homeserver.db.snapshot \
+    --postgres-config homeserver-postgres.yaml
+
+

The flag --curses displays a coloured curses progress UI.

+

If the script took a long time to complete, or time has otherwise passed +since the original snapshot was taken, repeat the previous steps with a +newer snapshot.

+

To complete the conversion shut down the synapse server and run the port +script one last time, e.g. if the SQLite database is at homeserver.db +run:

+
synapse_port_db --sqlite-database homeserver.db \
+    --postgres-config homeserver-postgres.yaml
+
+

Once that has completed, change the synapse config to point at the +PostgreSQL database configuration file homeserver-postgres.yaml:

+
./synctl stop
+mv homeserver.yaml homeserver-old-sqlite.yaml
+mv homeserver-postgres.yaml homeserver.yaml
+./synctl start
+
+

Synapse should now be running against PostgreSQL.

+

Troubleshooting

+

Alternative auth methods

+

If you get an error along the lines of FATAL: Ident authentication failed for user "synapse_user", you may need to use an authentication method other than +ident:

+
    +
  • If the synapse_user user has a password, add the password to the database: +section of homeserver.yaml. Then add the following to pg_hba.conf:

    host    synapse     synapse_user    ::1/128     md5  # or `scram-sha-256` instead of `md5` if you use that

  • If the synapse_user user does not have a password, then a password doesn't +have to be added to homeserver.yaml. But the following does need to be added +to pg_hba.conf:

    host    synapse     synapse_user    ::1/128     trust
+

Note that line order matters in pg_hba.conf, so make sure that if you do add a +new line, it is inserted before:

+
host    all         all             ::1/128     ident
+
+

Fixing incorrect COLLATE or CTYPE

+

Synapse will refuse to set up a new database if it has the wrong values of +COLLATE and CTYPE set, and will log warnings on existing databases. Using +different locales can cause issues if the locale library is updated from +underneath the database, or if a different version of the locale is used on any +replicas.

+

The safest way to fix the issue is to dump the database and recreate it with +the correct locale parameter (as shown above). It is also possible to change the +parameters on a live database and run a REINDEX on the entire database, +however extreme care must be taken to avoid database corruption.

+

Note that the above may fail with an error about duplicate rows if corruption +has already occurred, and such duplicate rows will need to be manually removed.

+

Fixing inconsistent sequences error

+

Synapse uses Postgres sequences to generate IDs for various tables. A sequence +and associated table can get out of sync if, for example, Synapse has been +downgraded and then upgraded again.

+

To fix the issue shut down Synapse (including any and all workers) and run the +SQL command included in the error message. Once done Synapse should start +successfully.

+

Using a reverse proxy with Synapse

+

It is recommended to put a reverse proxy such as +nginx, +Apache, +Caddy, +HAProxy or +relayd in front of Synapse. One advantage +of doing so is that it means that you can expose the default https port +(443) to Matrix clients without needing to run Synapse with root +privileges.

+

You should configure your reverse proxy to forward requests to /_matrix or +/_synapse/client to Synapse, and have it set the X-Forwarded-For and +X-Forwarded-Proto request headers.

+

You should remember that Matrix clients and other Matrix servers do not +necessarily need to connect to your server via the same server name or +port. Indeed, clients will use port 443 by default, whereas servers default to +port 8448. Where these are different, we refer to the 'client port' and the +'federation port'. See the Matrix +specification +for more details of the algorithm used for federation connections, and +delegate.md for instructions on setting up delegation.

+

NOTE: Your reverse proxy must not canonicalise or normalise +the requested URI in any way (for example, by decoding %xx escapes). +Beware that Apache will canonicalise URIs unless you specify +nocanon.

+

Let's assume that we expect clients to connect to our server at +https://matrix.example.com, and other servers to connect at +https://example.com:8448. The following sections detail the configuration of +the reverse proxy and the homeserver.

+

Reverse-proxy configuration examples

+

NOTE: You only need one of these.

+

nginx

+
server {
+    listen 443 ssl http2;
+    listen [::]:443 ssl http2;
+
+    # For the federation port
+    listen 8448 ssl http2 default_server;
+    listen [::]:8448 ssl http2 default_server;
+
+    server_name matrix.example.com;
+
+    location ~* ^(\/_matrix|\/_synapse\/client) {
+        proxy_pass http://localhost:8008;
+        proxy_set_header X-Forwarded-For $remote_addr;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header Host $host;
+
+        # Nginx by default only allows file uploads up to 1M in size
+        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
+        client_max_body_size 50M;
+    }
+}
+
+

NOTE: Do not add a path after the port in proxy_pass, otherwise nginx will +canonicalise/normalise the URI.

+

Caddy 1

+
matrix.example.com {
+  proxy /_matrix http://localhost:8008 {
+    transparent
+  }
+
+  proxy /_synapse/client http://localhost:8008 {
+    transparent
+  }
+}
+
+example.com:8448 {
+  proxy / http://localhost:8008 {
+    transparent
+  }
+}
+
+

Caddy 2

+
matrix.example.com {
+  reverse_proxy /_matrix/* http://localhost:8008
+  reverse_proxy /_synapse/client/* http://localhost:8008
+}
+
+example.com:8448 {
+  reverse_proxy http://localhost:8008
+}
+
+

Apache

+
<VirtualHost *:443>
+    SSLEngine on
+    ServerName matrix.example.com
+
+    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
+    AllowEncodedSlashes NoDecode
+    ProxyPreserveHost on
+    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+    ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
+    ProxyPassReverse /_synapse/client http://127.0.0.1:8008/_synapse/client
+</VirtualHost>
+
+<VirtualHost *:8448>
+    SSLEngine on
+    ServerName example.com
+
+    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
+    AllowEncodedSlashes NoDecode
+    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+</VirtualHost>
+
+

NOTE: ensure the nocanon options are included.

+

NOTE 2: It appears that Synapse is currently incompatible with the ModSecurity module for Apache (mod_security2). If you need it enabled for other services on your web server, you can disable it for Synapse's two VirtualHosts by including the following lines before each of the two </VirtualHost> above:

+
<IfModule security2_module>
+    SecRuleEngine off
+</IfModule>
+
+

NOTE 3: Missing ProxyPreserveHost on can lead to a redirect loop.

+

HAProxy

+
frontend https
  bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-For %[src]

  # Matrix client traffic
  acl matrix-host hdr(host) -i matrix.example.com
  acl matrix-path path_beg /_matrix
  acl matrix-path path_beg /_synapse/client

  use_backend matrix if matrix-host matrix-path

frontend matrix-federation
  bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-For %[src]

  default_backend matrix

backend matrix
  server matrix 127.0.0.1:8008
+
+

Relayd

+
table <webserver>    { 127.0.0.1 }
table <matrixserver> { 127.0.0.1 }

http protocol "https" {
    tls { no tlsv1.0, ciphers "HIGH" }
    tls keypair "example.com"
    match header set "X-Forwarded-For"   value "$REMOTE_ADDR"
    match header set "X-Forwarded-Proto" value "https"

    # set CORS header for .well-known/matrix/server, .well-known/matrix/client
    # httpd does not support setting headers, so do it here
    match request path "/.well-known/matrix/*" tag "matrix-cors"
    match response tagged "matrix-cors" header set "Access-Control-Allow-Origin" value "*"

    pass quick path "/_matrix/*"         forward to <matrixserver>
    pass quick path "/_synapse/client/*" forward to <matrixserver>

    # pass on non-matrix traffic to webserver
    pass                                 forward to <webserver>
}

relay "https_traffic" {
    listen on egress port 443 tls
    protocol "https"
    forward to <matrixserver> port 8008 check tcp
    forward to <webserver>    port 8080 check tcp
}

http protocol "matrix" {
    tls { no tlsv1.0, ciphers "HIGH" }
    tls keypair "example.com"
    block
    pass quick path "/_matrix/*"         forward to <matrixserver>
    pass quick path "/_synapse/client/*" forward to <matrixserver>
}

relay "matrix_federation" {
    listen on egress port 8448 tls
    protocol "matrix"
    forward to <matrixserver> port 8008 check tcp
}
+
+

Homeserver Configuration

+

You will also want to set bind_addresses: ['127.0.0.1'] and x_forwarded: true for port 8008 in homeserver.yaml to ensure that client IP addresses are recorded correctly.

+

Having done so, you can then use https://matrix.example.com (instead of https://matrix.example.com:8448) as the "Custom server" when connecting to Synapse from a client.

+

Health check endpoint

+

Synapse exposes a health check endpoint for use by reverse proxies. Each configured HTTP listener has a /health endpoint which always returns 200 OK (and doesn't get logged).
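A reverse proxy can poll this endpoint to take an unresponsive Synapse out of rotation. For example, the HAProxy backend shown earlier in this document could be extended as follows (a sketch; adapt to your own configuration):

```
backend matrix
  option httpchk GET /health
  server matrix 127.0.0.1:8008 check
```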

+

Synapse administration endpoints

+

Endpoints for administering your Synapse instance are placed under /_synapse/admin. These require authentication through an access token of an admin user. However, as access to these endpoints grants the caller a lot of power, we do not recommend exposing them to the public internet without good reason.

+

Overview

+

This document explains how to enable VoIP relaying on your Home Server with TURN.

+

The synapse Matrix Home Server supports integration with a TURN server via the TURN server REST API. This allows the Home Server to generate credentials that are valid for use on the TURN server through the use of a secret shared between the Home Server and the TURN server.
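The credential scheme can be sketched in a few lines: the homeserver builds an ephemeral username containing an expiry timestamp, and derives the password as an HMAC-SHA1 of that username keyed with the shared secret. A minimal Python illustration (not Synapse's actual code; the function name and arguments are invented for this sketch):

```python
import base64
import hashlib
import hmac
import time


def turn_credentials(shared_secret, user, lifetime_s=86400, now=None):
    """Derive ephemeral TURN credentials as in the TURN REST API memo.

    The username embeds an expiry timestamp; the password is
    base64(HMAC-SHA1(shared_secret, username)).
    """
    expiry = (int(time.time()) if now is None else now) + lifetime_s
    username = "%d:%s" % (expiry, user)
    mac = hmac.new(shared_secret.encode(), username.encode(), hashlib.sha1)
    return username, base64.b64encode(mac.digest()).decode()
```

Because the TURN server holds the same secret, it can recompute the HMAC to validate credentials without keeping any per-user state.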

+

The following sections describe how to install coturn (which implements the TURN REST API) and integrate it with synapse.

+

Requirements

+

For TURN relaying with coturn to work, it must be hosted on a server/endpoint with a public IP.

+

Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues and often does not work.

+

coturn setup

+

Initial installation

+

The TURN daemon coturn is available from a variety of sources such as native package managers, or installation from source.

+

Debian installation

+

Just install the debian package:

+
apt install coturn
+
+

This will install and start a systemd service called coturn.

+

Source installation

+
    +
  1.

    Download the latest release from github. Unpack it and cd into the directory.

    +
  2.

    Configure it:

    +
    ./configure
    +
    +

    You may need to install libevent2: if so, you should do so in the way recommended by your operating system. You can ignore warnings about lack of database support: a database is unnecessary for this purpose.

    +
  3.

    Build and install it:

    +
    make
    make install
    +
    +
+

Configuration

+
    +
  1.

    Create or edit the config file in /etc/turnserver.conf. The relevant lines, with example values, are:

    +
    use-auth-secret
    static-auth-secret=[your secret key here]
    realm=turn.myserver.org
    +
    +

    See turnserver.conf for explanations of the options. One way to generate the static-auth-secret is with pwgen:

    +
    pwgen -s 64 1
    +
    +

    A realm must be specified, but its value is somewhat arbitrary. (It is sent to clients as part of the authentication flow.) It is conventional to set it to be your server name.

    +
  2.

    You will most likely want to configure coturn to write logs somewhere. The easiest way is normally to send them to the syslog:

    +
    syslog
    +
    +

    (in which case, the logs will be available via journalctl -u coturn on a systemd system). Alternatively, coturn can be configured to write to a logfile - check the example config file supplied with coturn.

    +
  3.

    Consider your security settings. TURN lets users request a relay which will connect to arbitrary IP addresses and ports. The following configuration is suggested as a minimum starting point:

    +
    # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
    no-tcp-relay

    # don't let the relay ever try to connect to private IP address ranges within your network (if any)
    # given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
    denied-peer-ip=10.0.0.0-10.255.255.255
    denied-peer-ip=192.168.0.0-192.168.255.255
    denied-peer-ip=172.16.0.0-172.31.255.255

    # special case the turn server itself so that client->TURN->TURN->client flows work
    allowed-peer-ip=10.0.0.1

    # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
    user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
    total-quota=1200
    +
    +
  4.

    Also consider supporting TLS/DTLS. To do this, add the following settings to turnserver.conf:

    +
    # TLS certificates, including intermediate certs.
    # For Let's Encrypt certificates, use `fullchain.pem` here.
    cert=/path/to/fullchain.pem

    # TLS private key file
    pkey=/path/to/privkey.pem
    +
    +

    In this case, replace the turn: schemes in the turn_uri settings below with turns:.

    +

    We recommend that you only try to set up TLS/DTLS once you have set up a basic installation and got it working.

    +
  5.

    Ensure your firewall allows traffic into the TURN server on the ports you've configured it to listen on (by default: 3478 and 5349 for TURN traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535 for the UDP relay).

    +
  6.

    We do not recommend running a TURN server behind NAT, and are not aware of anyone doing so successfully.

    +

    If you want to try it anyway, you will at least need to tell coturn its external IP address:

    +
    external-ip=192.88.99.1
    +
    +

    ... and your NAT gateway must forward all of the relayed ports directly (eg, port 56789 on the external IP must always be forwarded to port 56789 on the internal IP).

    +

    If you get this working, let us know!

    +
  7.

    (Re)start the turn server:

    +
      +
    • +

      If you used the Debian package (or have set up a systemd unit yourself):

      +
      systemctl restart coturn
      +
      +
    • +
    • +

      If you installed from source:

      +
      bin/turnserver -o
      +
      +
    • +
    +
+

Synapse setup

+

Your home server configuration file needs the following extra keys:

+
    +
  1. "turn_uris": This needs to be a yaml list of public-facing URIs for your TURN server to be given out to your clients. Add separate entries for each transport your TURN server supports.
  2. "turn_shared_secret": This is the secret shared between your Home server and your TURN server, so you should set it to the same string you used in turnserver.conf.
  3. "turn_user_lifetime": This is the amount of time credentials generated by your Home Server are valid for (in milliseconds). Shorter times offer less potential for abuse at the expense of increased traffic between web clients and your home server to refresh credentials. The TURN REST API specification recommends one day (86400000).
  4. "turn_allow_guests": Whether to allow guest users to use the TURN server. This is enabled by default, as otherwise VoIP will not work reliably for guests. However, it does introduce a security risk as it lets guests connect to arbitrary endpoints without having gone through a CAPTCHA or similar to register a real account.
+

As an example, here is the relevant section of the config file for matrix.org. The turn_uris are appropriate for TURN servers listening on the default ports, with no TLS.

+
turn_uris: [ "turn:turn.matrix.org?transport=udp", "turn:turn.matrix.org?transport=tcp" ]
turn_shared_secret: "n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons"
turn_user_lifetime: 86400000
turn_allow_guests: True
+
+

After updating the homeserver configuration, you must restart synapse:

+
    +
  • If you use synctl:
    cd /where/you/run/synapse
    ./synctl restart
    +
    +
  • +
  • If you use systemd:
    systemctl restart matrix-synapse.service
    +
    +
  • +
+

... and then reload any clients (or wait an hour for them to refresh their settings).

+

Troubleshooting

+

The normal symptoms of a misconfigured TURN server are that calls between devices on different networks ring, but get stuck at "call connecting". Unfortunately, troubleshooting this can be tricky.

+

Here are a few things to try:

+
    +
  • +

    Check that your TURN server is not behind NAT. As above, we're not aware of anyone who has successfully set this up.

    +
  • +
  • +

    Check that you have opened your firewall to allow TCP and UDP traffic to the TURN ports (normally 3478 and 5349).

    +
  • +
  • +

    Check that you have opened your firewall to allow UDP traffic to the UDP relay ports (49152-65535 by default).

    +
  • +
  • +

    Some WebRTC implementations (notably, that of Google Chrome) appear to get confused by TURN servers which are reachable over IPv6 (this appears to be an unexpected side-effect of its handling of multiple IP addresses as defined by draft-ietf-rtcweb-ip-handling).

    +

    Try removing any AAAA records for your TURN server, so that it is only reachable over IPv4.

    +
  • +
  • +

    Enable more verbose logging in coturn via the verbose setting:

    +
    verbose
    +
    +

    ... and then see if there are any clues in its logs.

    +
  • +
  • +

    If you are using a browser-based client under Chrome, check chrome://webrtc-internals/ for insights into the internals of the negotiation. On Firefox, check the "Connection Log" on about:webrtc.

    +

    (Understanding the output is beyond the scope of this document!)

    +
  • +
  • +

    You can test your Matrix homeserver TURN setup with https://test.voip.librepush.net/. Note that this test is not fully reliable yet, so don't be discouraged if the test fails. Here is the github repo of the source of the tester, where you can file bug reports.

    +
  • +
  • +

    There is a WebRTC test tool at https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/. To use it, you will need a username/password for your TURN server. You can either:

    +
      +
    • +

      look for the GET /_matrix/client/r0/voip/turnServer request made by a matrix client to your homeserver in your browser's network inspector. In the response you should see username and password. Or:

      +
    • +
    • +

      Use the following shell commands:

      +
      secret=staticAuthSecretHere

      u=$((`date +%s` + 3600)):test
      p=$(echo -n $u | openssl dgst -hmac $secret -sha1 -binary | base64)
      echo -e "username: $u\npassword: $p"
      +
      +

      Or:

      +
    • +
    • +

      Temporarily configure coturn to accept a static username/password. To do this, comment out use-auth-secret and static-auth-secret and add the following:

      +
      lt-cred-mech
      user=username:password
      +
      +

      Note: these settings will not take effect unless use-auth-secret and static-auth-secret are disabled.

      +

      Restart coturn after changing the configuration file.

      +

      Remember to restore the original settings to go back to testing with Matrix clients!

      +
    • +
    +

    If the TURN server is working correctly, you should see at least one relay +entry in the results.

    +
  • +
+

Delegation

+

By default, other homeservers will expect to be able to reach yours via your server_name, on port 8448. For example, if you set your server_name to example.com (so that your user names look like @user:example.com), other servers will try to connect to yours at https://example.com:8448/.

+

Delegation is a Matrix feature allowing a homeserver admin to retain a server_name of example.com so that user IDs, room aliases, etc continue to look like *:example.com, whilst having federation traffic routed to a different server and/or port (e.g. synapse.example.com:443).

+

.well-known delegation

+

To use this method, you need to be able to alter the server_name's https server to serve the /.well-known/matrix/server URL. Having an active server (with a valid TLS certificate) serving your server_name domain is out of the scope of this documentation.

+

The URL https://<server_name>/.well-known/matrix/server should return a JSON structure containing the key m.server like so:

+
{
    "m.server": "<synapse.server.name>[:<yourport>]"
}
+
+

In our example, this would mean that the URL https://example.com/.well-known/matrix/server should return:

+
{
    "m.server": "synapse.example.com:443"
}
+
+

Note that specifying a port is optional. If no port is specified, it defaults to 8448.
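The port-defaulting rule can be expressed as a short sketch (illustrative only: parse_m_server is not a real Synapse function, and real server discovery involves further steps, such as handling IPv6 literals):

```python
def parse_m_server(m_server):
    """Split an m.server value into (hostname, port), defaulting the
    port to 8448 when none is given. Simplified sketch."""
    host, sep, port = m_server.rpartition(":")
    if sep and port.isdigit():
        return host, int(port)
    return m_server, 8448
```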

+

With .well-known delegation, federating servers will check for a valid TLS certificate for the delegated hostname (in our example: synapse.example.com).

+

SRV DNS record delegation

+

It is also possible to do delegation using a SRV DNS record. However, that is considered an advanced topic since it's a bit complex to set up, and .well-known delegation is already enough in most cases.

+

However, if you really need it, you can find some documentation on what such a record should look like and how Synapse will use it in the Matrix specification.

+

Delegation FAQ

+

When do I need delegation?

+

If your homeserver's APIs are accessible on the default federation port (8448) and the domain your server_name points to, you do not need any delegation.

+

For instance, if you registered example.com and pointed its DNS A record at a fresh server, you could install Synapse on that host, giving it a server_name of example.com; once a reverse proxy has been set up to proxy all requests sent to port 8448 and serve TLS certificates for example.com, you wouldn't need any delegation set up.

+

However, if your homeserver's APIs aren't accessible on port 8448 and on the domain server_name points to, you will need to let other servers know how to find it using delegation.

+

Do you still recommend against using a reverse proxy on the federation port?

+

We no longer actively recommend against using a reverse proxy. Many admins will find it easier to direct federation traffic to a reverse proxy and manage their own TLS certificates, and this is a supported configuration.

+

See reverse_proxy.md for information on setting up a reverse proxy.

+

Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?

+

This is no longer necessary. If you are using a reverse proxy for all of your TLS traffic, then you can set no_tls: True in the Synapse config.

+

In that case, the only reason Synapse needs the certificate is to populate a legacy tls_fingerprints field in the federation API. This is ignored by Synapse 0.99.0 and later, and the only time pre-0.99 Synapses will check it is when attempting to fetch the server keys - and generally this is delegated via matrix.org, which is running a modern version of Synapse.

+

Do I need the same certificate for the client and federation port?

+

No. There is nothing stopping you from using different certificates, particularly if you are using a reverse proxy.

+
+

Upgrading Synapse

+

Before upgrading, check if any special steps are required to upgrade from the version you currently have installed to the current version of Synapse. The extra instructions that may be required are listed later in this document.

+
    +
  • +

    Check that your versions of Python and PostgreSQL are still supported.

    +

    Synapse follows upstream lifecycles for Python and PostgreSQL, and removes support for versions which are no longer maintained.

    +

    The website https://endoflife.date also offers convenient summaries.

    +

    Python: https://devguide.python.org/devcycle/#end-of-life-branches
    PostgreSQL: https://www.postgresql.org/support/versioning/

    +
  • +
  • +

    If Synapse was installed using prebuilt packages (see INSTALL.md#prebuilt-packages), you will need to follow the normal process for upgrading those packages.

    +
  • +
  • +

    If Synapse was installed from source, then:

    +
      +
    1.

      Activate the virtualenv before upgrading. For example, if Synapse is installed in a virtualenv in ~/synapse/env then run:

      +


      +

      source ~/synapse/env/bin/activate

      +
    2.

      If Synapse was installed using pip then upgrade to the latest version by running:

      +


      +

      pip install --upgrade matrix-synapse

      +

      If Synapse was installed using git then upgrade to the latest version by running:

      +


      +

      git pull
      pip install --upgrade .

      +
    3.

      Restart Synapse:

      +


      +

      ./synctl restart

      +
    +
  • +
+

To check whether your update was successful, you can check the running server version with:

+


+
# you may need to replace 'localhost:8008' if synapse is not configured
# to listen on port 8008.

curl http://localhost:8008/_synapse/admin/v1/server_version
+
+

Rolling back to older versions

+

Rolling back to previous releases can be difficult, due to database schema changes between releases. Where we have been able to test the rollback process, this will be noted below.

+

In general, you will need to undo any changes made during the upgrade process, for example:

+
    +
  • +

    pip:

    +


    +

    source env/bin/activate

    +

    replace 1.3.0 accordingly:

    +

    pip install matrix-synapse==1.3.0

    +
  • +
  • +

    Debian:

    +


    +

    replace 1.3.0 and stretch accordingly:

    +

    wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
    dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb

    +
  • +
+

Upgrading to v1.34.0

+

room_invite_state_types configuration setting

+

The room_invite_state_types configuration setting has been deprecated and replaced with room_prejoin_state. See the sample configuration file (https://github.com/matrix-org/synapse/blob/v1.34.0/docs/sample_config.yaml#L1515).

+

If you have set room_invite_state_types to the default value you should simply remove it from your configuration file. The default value used to be:

+


+

room_invite_state_types:
  - "m.room.join_rules"
  - "m.room.canonical_alias"
  - "m.room.avatar"
  - "m.room.encryption"
  - "m.room.name"

+

If you have customised this value, you should remove room_invite_state_types and configure room_prejoin_state instead.

+

Upgrading to v1.33.0

+

Account Validity HTML templates can now display a user's expiration date

+

This may affect you if you have enabled the account validity feature, and have made use of a custom HTML template specified by the account_validity.template_dir or account_validity.account_renewed_html_path Synapse config options.

+

The template can now accept an expiration_ts variable, which represents the unix timestamp in milliseconds of the future date until which the account has been renewed. See the default template (https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_renewed.html) for an example of usage.

+

Also note that a new HTML template, account_previously_renewed.html, has been added. This is shown to users when they attempt to renew their account with a valid renewal token that has already been used before. The default template contents can be found here (https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_previously_renewed.html), and can also accept an expiration_ts variable. This template replaces the error message users would previously see upon attempting to use a valid renewal token more than once.

+

Upgrading to v1.32.0

+

Regression causing connected Prometheus instances to become overwhelmed

+

This release introduces a regression (https://github.com/matrix-org/synapse/issues/9853) that can overwhelm connected Prometheus instances. This issue is not present in Synapse v1.32.0rc1.

+

If you have been affected, please downgrade to 1.31.0. You may then need to remove excess write-ahead logs in order for Prometheus to recover. Instructions for doing so are provided here (https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183).

+

Dropping support for old Python, Postgres and SQLite versions

+

In line with our deprecation policy (https://github.com/matrix-org/synapse/blob/release-v1.32.0/docs/deprecation_policy.md), we've dropped support for Python 3.5 and PostgreSQL 9.5, as they are no longer supported upstream.

+

This release of Synapse requires Python 3.6+ and PostgreSQL 9.6+ or SQLite 3.22+.

+

Removal of old List Accounts Admin API

+

The deprecated v1 "list accounts" admin API (GET /_synapse/admin/v1/users/<user_id>) has been removed in this version.

+

The v2 list accounts API (https://github.com/matrix-org/synapse/blob/master/docs/admin_api/user_admin_api.rst#list-accounts) has been available since Synapse 1.7.0 (2019-12-13), and is accessible under GET /_synapse/admin/v2/users.

+

The deprecation of the old endpoint was announced with Synapse 1.28.0 (released on 2021-02-25).

+

Application Services must use type m.login.application_service when registering users

+

In compliance with the Application Service spec (https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions), Application Services are now required to use the m.login.application_service type when registering users via the /_matrix/client/r0/register endpoint. This behaviour was deprecated in Synapse v1.30.0.

+

Please ensure your Application Services are up to date.

+

Upgrading to v1.29.0

+

Requirement for X-Forwarded-Proto header

+

When using Synapse with a reverse proxy (in particular, when using the x_forwarded option on an HTTP listener), Synapse now expects to receive an X-Forwarded-Proto header on incoming HTTP requests. If it is not set, Synapse will log a warning on each received request.

+

To avoid the warning, administrators using a reverse proxy should ensure that the reverse proxy sets the X-Forwarded-Proto header to https or http to indicate the protocol used by the client.

+

Synapse also requires the Host header to be preserved.
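With nginx, for instance, both requirements can be satisfied by two directives inside the relevant location block (a sketch; adapt to your existing configuration):

```
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
```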

+

See the reverse proxy documentation (docs/reverse_proxy.md), where the example configurations have been updated to show how to set these headers.

+

(Users of Caddy (https://caddyserver.com/) are unaffected, since we believe it sets X-Forwarded-Proto by default.)

+

Upgrading to v1.27.0

+

Changes to callback URI for OAuth2 / OpenID Connect and SAML2

+

This version changes the URI used for callbacks from OAuth2 and SAML2 identity providers:

+
    +
  • +

    If your server is configured for single sign-on via an OpenID Connect or OAuth2 identity provider, you will need to add [synapse public baseurl]/_synapse/client/oidc/callback to the list of permitted "redirect URIs" at the identity provider.

    +

    See docs/openid.md for more information on setting up OpenID Connect.

    +
  • +
  • +

    If your server is configured for single sign-on via a SAML2 identity provider, you will need to add [synapse public baseurl]/_synapse/client/saml2/authn_response as a permitted "ACS location" (also known as "allowed callback URLs") at the identity provider.

    +

    The "Issuer" in the "AuthnRequest" to the SAML2 identity provider is also updated to [synapse public baseurl]/_synapse/client/saml2/metadata.xml. If your SAML2 identity provider uses this property to validate or otherwise identify Synapse, its configuration will need to be updated to use the new URL. Alternatively you could create a new, separate "EntityDescriptor" in your SAML2 identity provider with the new URLs and leave the URLs in the existing "EntityDescriptor" as they were.

    +
  • +
+

Changes to HTML templates

+

The HTML templates for SSO and email notifications now have Jinja2's autoescape (https://jinja.palletsprojects.com/en/2.11.x/api/#autoescaping) enabled for files ending in .html, .htm, and .xml. If you have customised these templates and see issues when viewing them you might need to update them. It is expected that most configurations will need no changes.

+

If you have customised the names of these templates, it is recommended to verify they end in .html to ensure autoescape is enabled.

+

The above applies to the following templates:

+
    +
  • add_threepid.html
  • +
  • add_threepid_failure.html
  • +
  • add_threepid_success.html
  • +
  • notice_expiry.html
  • +
  • notif_mail.html (which, by default, includes room.html and notif.html)
  • +
  • password_reset.html
  • +
  • password_reset_confirmation.html
  • +
  • password_reset_failure.html
  • +
  • password_reset_success.html
  • +
  • registration.html
  • +
  • registration_failure.html
  • +
  • registration_success.html
  • +
  • sso_account_deactivated.html
  • +
  • sso_auth_bad_user.html
  • +
  • sso_auth_confirm.html
  • +
  • sso_auth_success.html
  • +
  • sso_error.html
  • +
  • sso_login_idp_picker.html
  • +
  • sso_redirect_confirm.html
  • +
+
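The extension check described above can be illustrated with a small sketch (autoescape_enabled is a hypothetical helper, not Synapse or Jinja2 code):

```python
def autoescape_enabled(template_name):
    """Return True when the template's file extension triggers
    autoescaping, mirroring the .html/.htm/.xml rule (simplified)."""
    return template_name.rsplit(".", 1)[-1].lower() in {"html", "htm", "xml"}
```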

Upgrading to v1.26.0

+

Rolling back to v1.25.0 after a failed upgrade

+

v1.26.0 includes a lot of large changes. If something problematic occurs, you may want to roll back to a previous version of Synapse. Because v1.26.0 also includes a new database schema version, reverting that version is also required alongside the generic rollback instructions mentioned above. In short, to roll back to v1.25.0 you need to:

+
    +
  1.

    Stop the server

    +
  2.

    Decrease the schema version in the database:

    +


    +

    UPDATE schema_version SET version = 58;

    +
  3.

    Delete the ignored users & chain cover data:

    +


    +

    DROP TABLE IF EXISTS ignored_users;
    UPDATE rooms SET has_auth_chain_index = false;

    +

    For PostgreSQL run:

    +


    +

    TRUNCATE event_auth_chain_links;
    TRUNCATE event_auth_chains;

    +

    For SQLite run:

    +


    +

    DELETE FROM event_auth_chain_links;
    DELETE FROM event_auth_chains;

    +
  4.

    Mark the deltas as not run (so they will re-run on upgrade).

    +


    +

    DELETE FROM applied_schema_deltas WHERE version = 59 AND file = '59/01ignored_user.py';
    DELETE FROM applied_schema_deltas WHERE version = 59 AND file = '59/06chain_cover_index.sql';

    +
  5.

    Downgrade Synapse by following the instructions for your installation method in the "Rolling back to older versions" section above.

    +
+

Upgrading to v1.25.0

+

Last release supporting Python 3.5

+

This is the last release of Synapse which guarantees support with Python 3.5, which passed its upstream End of Life date several months ago.

+

We will attempt to maintain support through March 2021, but without guarantees.

+

In the future, Synapse will follow upstream schedules for ending support of older versions of Python and PostgreSQL. Please upgrade to at least Python 3.6 and PostgreSQL 9.6 as soon as possible.

+

Blacklisting IP ranges

+

Synapse v1.25.0 includes new settings, ip_range_blacklist and ip_range_whitelist, for controlling outgoing requests from Synapse for federation, identity servers, push, and for checking key validity for third-party invite events. The previous setting, federation_ip_range_blacklist, is deprecated. The new ip_range_blacklist defaults to private IP ranges if it is not defined.

+

If you have never customised federation_ip_range_blacklist it is recommended that you remove that setting.

+

If you have customised federation_ip_range_blacklist you should update the setting name to ip_range_blacklist.

+

If you have a custom push server that is reached via private IP space you may need to customise ip_range_blacklist or ip_range_whitelist.
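As an illustration, a deployment with a push server on a private address might allow just that range while keeping the default blacklist (the address below is an example only):

```yaml
ip_range_whitelist:
  - '192.168.1.0/24'
```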

+

Upgrading to v1.24.0

+

Custom OpenID Connect mapping provider breaking change

+

This release allows the OpenID Connect mapping provider to perform normalisation of the localpart of the Matrix ID. This allows the mapping provider to specify different algorithms, instead of the default.

+

If your Synapse configuration uses a custom mapping provider (oidc_config.user_mapping_provider.module is specified and not equal to synapse.handlers.oidc_handler.JinjaOidcMappingProvider) then you must ensure that map_user_attributes of the mapping provider performs some normalisation of the localpart returned. To match previous behaviour you can use the map_username_to_mxid_localpart function provided by Synapse. An example is shown below:

+


+

from synapse.types import map_username_to_mxid_localpart


class MyMappingProvider:
    def map_user_attributes(self, userinfo, token):
        # ... your custom logic ...
        sso_user_id = ...
        localpart = map_username_to_mxid_localpart(sso_user_id)
        return {"localpart": localpart}
+
+

Removal of historical Synapse Admin API

+

Historically, the Synapse Admin API has been accessible under:

+
    +
  • /_matrix/client/api/v1/admin
  • +
  • /_matrix/client/unstable/admin
  • +
  • /_matrix/client/r0/admin
  • +
  • /_synapse/admin/v1
  • +
+

The endpoints with /_matrix/client/* prefixes have been removed as of v1.24.0. The Admin API is now only accessible under:

+
  • /_synapse/admin/v1

The only exception is the /admin/whois endpoint, which is also available via the client-server API <https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid>_.

+

The deprecation of the old endpoints was announced with Synapse 1.20.0 (released on 2020-09-22) and makes it easier for homeserver admins to lock down external access to the Admin API endpoints.

+

Upgrading to v1.23.0

+

Structured logging configuration breaking changes

+

This release deprecates use of the structured: true logging configuration for structured logging. If your logging configuration contains structured: true then it should be modified based on the structured logging documentation <https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md>_.

+

The structured and drains logging options are now deprecated and should be replaced by standard logging configuration of handlers and formatters.

+

A future release of Synapse will make using structured: true an error.

+

Upgrading to v1.22.0

+

ThirdPartyEventRules breaking changes

+

This release introduces a backwards-incompatible change to modules making use of ThirdPartyEventRules in Synapse. If you make use of a module defined under the third_party_event_rules config option, please make sure it is updated to handle the below change:

+

The http_client argument is no longer passed to modules as they are initialised. Instead, modules are expected to make use of the http_client property on the ModuleApi class. Modules are now passed a module_api argument during initialisation, which is an instance of ModuleApi. ModuleApi instances have a http_client property which acts the same as the http_client argument previously passed to ThirdPartyEventRules modules.
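For illustration only, the shape of the change looks like the following sketch. The module class and the FakeModuleApi stand-in are hypothetical; only the module_api argument and its http_client property mirror the interface described above.

```python
class ExampleEventRulesModule:
    """Hypothetical third_party_event_rules module updated for v1.22.0."""

    # Before v1.22.0, Synapse passed an http_client argument directly:
    #     def __init__(self, config, http_client): ...
    # From v1.22.0, a ModuleApi instance is passed instead, and the same
    # client object is reached through its http_client property:
    def __init__(self, config, module_api):
        self.config = config
        self.http_client = module_api.http_client


# Minimal stand-in for ModuleApi, just to demonstrate the access pattern:
class FakeModuleApi:
    def __init__(self, client):
        self._client = client

    @property
    def http_client(self):
        return self._client


client = object()
module = ExampleEventRulesModule(config={}, module_api=FakeModuleApi(client))
assert module.http_client is client  # same object as before the change
```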

+

Upgrading to v1.21.0

+

Forwarding /_synapse/client through your reverse proxy

+

The reverse proxy documentation <https://github.com/matrix-org/synapse/blob/develop/docs/reverse_proxy.md>_ has been updated to include reverse proxy directives for /_synapse/client/* endpoints. As the user password reset flow now uses endpoints under this prefix, you must update your reverse proxy configurations for user password reset to work.

+

Additionally, note that the Synapse worker documentation <https://github.com/matrix-org/synapse/blob/develop/docs/workers.md>_ has been updated to state that the /_synapse/client/password_reset/email/submit_token endpoint can be handled by all workers. If you make use of Synapse's worker feature, please update your reverse proxy configuration to reflect this change.

+

New HTML templates

+

A new HTML template, +password_reset_confirmation.html <https://github.com/matrix-org/synapse/blob/develop/synapse/res/templates/password_reset_confirmation.html>_, +has been added to the synapse/res/templates directory. If you are using a +custom template directory, you may want to copy the template over and modify it.

+

Note that as of v1.20.0, templates do not need to be included in custom template +directories for Synapse to start. The default templates will be used if a custom +template cannot be found.

+

This page will appear to the user after clicking a password reset link that has +been emailed to them.

+

To complete password reset, the page must include a way to make a POST +request to +/_synapse/client/password_reset/{medium}/submit_token +with the query parameters from the original link, presented as a URL-encoded form. See the file +itself for more details.

+

Updated Single Sign-on HTML Templates

+

The saml_error.html template was removed from Synapse and replaced with the sso_error.html template. If your Synapse is configured to use SAML and a custom sso_redirect_confirm_template_dir configuration then any customisations of the saml_error.html template will need to be merged into the sso_error.html template. These templates are similar, but the parameters are slightly different:

+
  • The msg parameter should be renamed to error_description.
  • There is no longer a code parameter for the response code.
  • A string error parameter is available that includes a short hint of why a user is seeing the error page.

Upgrading to v1.18.0

+

Docker -py3 suffix will be removed in future versions

+

From 10th August 2020, we will no longer publish Docker images with the -py3 tag suffix. The images tagged with the -py3 suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.

+

On 10th August, we will remove the latest-py3 tag. Existing per-release tags (such as v1.18.0-py3) will not be removed, but no new -py3 tags will be added.

+

Scripts relying on the -py3 suffix will need to be updated.

+ +

When setting up worker processes, we now recommend the use of a Redis server for replication. The old direct TCP connection method is deprecated and will be removed in a future release. +See docs/workers.md <docs/workers.md>_ for more details.

+

Upgrading to v1.14.0

+

This version includes a database update which is run as part of the upgrade, +and which may take a couple of minutes in the case of a large server. Synapse +will not respond to HTTP requests while this update is taking place.

+

Upgrading to v1.13.0

+

Incorrect database migration in old synapse versions

+

A bug was introduced in Synapse 1.4.0 which could cause the room directory to +be incomplete or empty if Synapse was upgraded directly from v1.2.1 or +earlier, to versions between v1.4.0 and v1.12.x.

+

This will not be a problem for Synapse installations which were:

+
  • created at v1.4.0 or later,
  • upgraded via v1.3.x, or
  • upgraded straight from v1.2.1 or earlier to v1.13.0 or later.

If completeness of the room directory is a concern, installations which are +affected can be repaired as follows:

+
  1. Run the following sql from a psql or sqlite3 console:

     .. code:: sql

        INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
          ('populate_stats_process_rooms', '{}', 'current_state_events_membership');

        INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
          ('populate_stats_process_users', '{}', 'populate_stats_process_rooms');

  2. Restart synapse.

New Single Sign-on HTML Templates

+

New templates (sso_auth_confirm.html, sso_auth_success.html, and sso_account_deactivated.html) were added to Synapse. If your Synapse is configured to use SSO and a custom sso_redirect_confirm_template_dir configuration then these templates will need to be copied from synapse/res/templates <synapse/res/templates>_ into that directory.

+

Synapse SSO Plugins Method Deprecation

+

Plugins using the complete_sso_login method of synapse.module_api.ModuleApi should update to using the async/await version complete_sso_login_async which includes additional checks. The non-async version is considered deprecated.
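The change for plugin authors is mechanical: the calling function becomes a coroutine and awaits the new method. A rough sketch with a stand-in object follows; FakeModuleApi and on_sso_login are hypothetical, and the real methods live on synapse.module_api.ModuleApi.

```python
import asyncio


class FakeModuleApi:
    """Stand-in for synapse.module_api.ModuleApi, for illustration only."""

    def complete_sso_login(self, user_id, request, client_redirect_url):
        # Deprecated synchronous variant.
        return "sync"

    async def complete_sso_login_async(self, user_id, request, client_redirect_url):
        # Preferred variant; in Synapse this performs additional checks.
        return "async"


# Plugin code before: api.complete_sso_login(...) was called directly.
# Plugin code after: the surrounding function is async and awaits instead.
async def on_sso_login(api, user_id, request, redirect_url):
    return await api.complete_sso_login_async(user_id, request, redirect_url)


result = asyncio.run(on_sso_login(FakeModuleApi(), "@alice:example.com", None, "/"))
print(result)  # async
```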

+

Rolling back to v1.12.4 after a failed upgrade

+

v1.13.0 includes a lot of large changes. If something problematic occurs, you may want to roll back to a previous version of Synapse. Because v1.13.0 also includes a new database schema version, reverting that version is also required alongside the generic rollback instructions mentioned above. In short, to roll back to v1.12.4 you need to:

+
  1. Stop the server.

  2. Decrease the schema version in the database:

     .. code:: sql

        UPDATE schema_version SET version = 57;

  3. Downgrade Synapse by following the instructions for your installation method in the "Rolling back to older versions" section above.

Upgrading to v1.12.0

+

This version includes a database update which is run as part of the upgrade, and which may take some time (several hours in the case of a large server). Synapse will not respond to HTTP requests while this update is taking place.

+

This is only likely to be a problem in the case of a server which is participating in many rooms.

+
  1. As with all upgrades, it is recommended that you have a recent backup of your database which can be used for recovery in the event of any problems.

  2. As an initial check to see if you will be affected, you can try running the following query from the psql or sqlite3 console. It is safe to run it while Synapse is still running.

     .. code:: sql

        SELECT MAX(q.v) FROM (
          SELECT (
            SELECT ej.json AS v
            FROM state_events se INNER JOIN event_json ej USING (event_id)
            WHERE se.room_id=rooms.room_id AND se.type='m.room.create' AND se.state_key=''
            LIMIT 1
          ) FROM rooms WHERE rooms.room_version IS NULL
        ) q;

     This query will take about the same amount of time as the upgrade process: ie, if it takes 5 minutes, then it is likely that Synapse will be unresponsive for 5 minutes during the upgrade.

     If you consider an outage of this duration to be acceptable, no further action is necessary and you can simply start Synapse 1.12.0.

     If you would prefer to reduce the downtime, continue with the steps below.

  3. The easiest workaround for this issue is to manually create a new index before upgrading. On PostgreSQL, this can be done as follows:

     .. code:: sql

        CREATE INDEX CONCURRENTLY tmp_upgrade_1_12_0_index
        ON state_events(room_id) WHERE type = 'm.room.create';

     The above query may take some time, but is also safe to run while Synapse is running.

     We assume that no SQLite users have databases large enough to be affected. If you are affected, you can run a similar query, omitting the CONCURRENTLY keyword. Note however that this operation may in itself cause Synapse to stop running for some time. Synapse admins are reminded that SQLite is not recommended for use outside a test environment <https://github.com/matrix-org/synapse/blob/master/README.rst#using-postgresql>_.

  4. Once the index has been created, the SELECT query in step 2 above should complete quickly. It is therefore safe to upgrade to Synapse 1.12.0.

  5. Once Synapse 1.12.0 has successfully started and is responding to HTTP requests, the temporary index can be removed:

     .. code:: sql

        DROP INDEX tmp_upgrade_1_12_0_index;

Upgrading to v1.10.0

+

Synapse will now log a warning on start up if used with a PostgreSQL database +that has a non-recommended locale set.

+

See docs/postgres.md <docs/postgres.md>_ for details.
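As a quick check (this query is illustrative; docs/postgres.md describes the recommended locale in detail), the locale of each database can be inspected from a psql console:

.. code:: sql

   -- Lists each database with its collation and character-type locale:
   SELECT datname, datcollate, datctype FROM pg_database;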

+

Upgrading to v1.8.0

+

Specifying a log_file config option will now cause Synapse to refuse to start, and should be replaced with the log_config option. Support for the log_file option was removed in v1.3.0, and it has had no effect since then.

+

Upgrading to v1.7.0

+

In an attempt to configure Synapse in a privacy preserving way, the default behaviours of allow_public_rooms_without_auth and allow_public_rooms_over_federation have been inverted. This means that by default, only authenticated users querying the Client/Server API will be able to query the room directory, and relatedly that the server will not share room directory information with other servers over federation.

+

If your installation does not explicitly set these settings one way or the other and you want either setting to be true then it will be necessary to update your homeserver configuration file accordingly.

+

For more details on the surrounding context see our explainer <https://matrix.org/blog/2019/11/09/avoiding-unwelcome-visitors-on-private-matrix-servers>_.
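If you rely on the old behaviour, the pre-1.7.0 defaults can be restored explicitly in homeserver.yaml, shown here as a sketch of the two options named above:

.. code:: yaml

   allow_public_rooms_without_auth: true
   allow_public_rooms_over_federation: true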

+

Upgrading to v1.5.0

+

This release includes a database migration which may take several minutes to complete if there are a large number (more than a million or so) of entries in the devices table. This is only likely to be a problem on very large installations.

+

Upgrading to v1.4.0

+

New custom templates

+

If you have configured a custom template directory with the email.template_dir option, be aware that there are new templates regarding registration and threepid management (see below) that must be included.

+
  • registration.html and registration.txt
  • registration_success.html and registration_failure.html
  • add_threepid.html and add_threepid.txt
  • add_threepid_failure.html and add_threepid_success.html

Synapse will expect these files to exist inside the configured template directory, and will fail to start if they are absent. To view the default templates, see synapse/res/templates <https://github.com/matrix-org/synapse/tree/master/synapse/res/templates>_.

+

3pid verification changes

+

Note: As of this release, users will be unable to add phone numbers or email +addresses to their accounts, without changes to the Synapse configuration. This +includes adding an email address during registration.

+

It is possible for a user to associate an email address or phone number +with their account, for a number of reasons:

+
  • for use when logging in, as an alternative to the user id.
  • in the case of email, as an alternative contact to help with account recovery.
  • in the case of email, to receive notifications of missed messages.

Before an email address or phone number can be added to a user's account, +or before such an address is used to carry out a password-reset, Synapse must +confirm the operation with the owner of the email address or phone number. +It does this by sending an email or text giving the user a link or token to confirm +receipt. This process is known as '3pid verification'. ('3pid', or 'threepid', +stands for third-party identifier, and we use it to refer to external +identifiers such as email addresses and phone numbers.)

+

Previous versions of Synapse delegated the task of 3pid verification to an +identity server by default. In most cases this server is vector.im or +matrix.org.

+

In Synapse 1.4.0, for security and privacy reasons, the homeserver will no +longer delegate this task to an identity server by default. Instead, +the server administrator will need to explicitly decide how they would like the +verification messages to be sent.

+

In the medium term, the vector.im and matrix.org identity servers will +disable support for delegated 3pid verification entirely. However, in order to +ease the transition, they will retain the capability for a limited +period. Delegated email verification will be disabled on Monday 2nd December +2019 (giving roughly 2 months notice). Disabling delegated SMS verification +will follow some time after that once SMS verification support lands in +Synapse.

+

Once delegated 3pid verification support has been disabled in the vector.im and +matrix.org identity servers, all Synapse versions that depend on those +instances will be unable to verify email and phone numbers through them. There +are no imminent plans to remove delegated 3pid verification from Sydent +generally. (Sydent is the identity server project that backs the vector.im and +matrix.org instances).

+

Email

+
Following upgrade, to continue verifying email (e.g. as part of the registration process), admins can either:

  • Configure Synapse to use an email server.
  • Run or choose an identity server which allows delegated email verification and delegate to it.

Configure SMTP in Synapse

To configure an SMTP server for Synapse, modify the configuration section headed email, and be sure to have at least the smtp_host, smtp_port and notif_from fields filled out.

You may also need to set smtp_user, smtp_pass, and require_transport_security.

See the sample configuration file <docs/sample_config.yaml>_ for more details on these settings.
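As a sketch only (hostname and credentials are placeholders; the sample configuration file remains the authoritative reference), the relevant fragment of homeserver.yaml might look like:

.. code:: yaml

   email:
      smtp_host: mail.example.com
      smtp_port: 587
      smtp_user: "synapse"
      smtp_pass: "secret"
      require_transport_security: true
      notif_from: "Your Matrix server <noreply@example.com>"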
+
Delegate email to an identity server

Some admins will wish to continue using email verification as part of the registration process, but will not immediately have an appropriate SMTP server at hand.

To this end, we will continue to support email verification delegation via the vector.im and matrix.org identity servers for two months. Support for delegated email verification will be disabled on Monday 2nd December.

The account_threepid_delegates dictionary defines whether the homeserver should delegate an external server (typically an identity server <https://matrix.org/docs/spec/identity_service/r0.2.1>_) to handle sending confirmation messages via email and SMS.

So to delegate email verification, in homeserver.yaml, set account_threepid_delegates.email to the base URL of an identity server. For example:

.. code:: yaml

   account_threepid_delegates:
       email: https://example.com     # Delegate email sending to example.com

Note that account_threepid_delegates.email replaces the deprecated email.trust_identity_server_for_password_resets: if email.trust_identity_server_for_password_resets is set to true, and account_threepid_delegates.email is not set, then the first entry in trusted_third_party_id_servers will be used as the account_threepid_delegate for email. This is to ensure compatibility with existing Synapse installs that set up external server handling for these tasks before v1.4.0. If email.trust_identity_server_for_password_resets is true and no trusted identity server domains are configured, Synapse will report an error and refuse to start.

If email.trust_identity_server_for_password_resets is false or absent and no email delegate is configured in account_threepid_delegates, then Synapse will send email verification messages itself, using the configured SMTP server (see above).

Phone numbers
+

Synapse does not support phone-number verification itself, so the only way to maintain the ability for users to add phone numbers to their accounts will be by continuing to delegate phone number verification to the matrix.org and vector.im identity servers (or another identity server that supports SMS sending).

+

The account_threepid_delegates dictionary defines whether the homeserver should delegate an external server (typically an identity server <https://matrix.org/docs/spec/identity_service/r0.2.1>_) to handle sending confirmation messages via email and SMS.

+

So to delegate phone number verification, in homeserver.yaml, set account_threepid_delegates.msisdn to the base URL of an identity server. For example:

+

.. code:: yaml

+

   account_threepid_delegates:
       msisdn: https://example.com     # Delegate SMS sending to example.com

+

The matrix.org and vector.im identity servers will continue to support delegated phone number verification via SMS until such time as it is possible for admins to configure their servers to perform phone number verification directly. More details will follow in a future release.

+

Rolling back to v1.3.1

+

If you encounter problems with v1.4.0, it should be possible to roll back to v1.3.1, subject to the following:

+
  • The 'room statistics' engine was heavily reworked in this release (see #5971 <https://github.com/matrix-org/synapse/pull/5971>_), including significant changes to the database schema, which are not easily reverted. This will cause the room statistics engine to stop updating when you downgrade.

    The room statistics are essentially unused in v1.3.1 (in future versions of Synapse, they will be used to populate the room directory), so there should be no loss of functionality. However, the statistics engine will write errors to the logs, which can be avoided by setting the following in homeserver.yaml:

    .. code:: yaml

       stats:
          enabled: false

    Don't forget to re-enable it when you upgrade again, in preparation for its use in the room directory!

Upgrading to v1.2.0

+

Some counter metrics have been renamed, with the old names deprecated. See +the metrics documentation <docs/metrics-howto.md#renaming-of-metrics--deprecation-of-old-names-in-12>_ +for details.

+

Upgrading to v1.1.0

+

Synapse v1.1.0 removes support for older Python and PostgreSQL versions, as +outlined in our deprecation notice <https://matrix.org/blog/2019/04/08/synapse-deprecating-postgres-9-4-and-python-2-x>_.

+

Minimum Python Version

+

Synapse v1.1.0 has a minimum Python requirement of Python 3.5. Python 3.6 or +Python 3.7 are recommended as they have improved internal string handling, +significantly reducing memory usage.

+

If you use current versions of the Matrix.org-distributed Debian packages or +Docker images, action is not required.

+

If you install Synapse in a Python virtual environment, please see "Upgrading to +v0.34.0" for notes on setting up a new virtualenv under Python 3.

+

Minimum PostgreSQL Version

+

If using PostgreSQL under Synapse, you will need to use PostgreSQL 9.5 or above. +Please see the +PostgreSQL documentation <https://www.postgresql.org/docs/11/upgrading.html>_ +for more details on upgrading your database.

+

Upgrading to v1.0

+

Validation of TLS certificates

+

Synapse v1.0 is the first release to enforce +validation of TLS certificates for the federation API. It is therefore +essential that your certificates are correctly configured. See the FAQ <docs/MSC1711_certificates_FAQ.md>_ for more information.

+

Note, v1.0 installations will also no longer be able to federate with servers +that have not correctly configured their certificates.

+

In rare cases, it may be desirable to disable certificate checking: for +example, it might be essential to be able to federate with a given legacy +server in a closed federation. This can be done in one of two ways:-

+
  • Configure the global switch federation_verify_certificates to false.
  • Configure a whitelist of server domains to trust via federation_certificate_verification_whitelist.
+

See the sample configuration file <docs/sample_config.yaml>_ +for more details on these settings.

+

Email

+

When a user requests a password reset, Synapse will send an email to the +user to confirm the request.

+

Previous versions of Synapse delegated the job of sending this email to an +identity server. If the identity server was somehow malicious or became +compromised, it would be theoretically possible to hijack an account through +this means.

+

Therefore, by default, Synapse v1.0 will send the confirmation email itself. If +Synapse is not configured with an SMTP server, password reset via email will be +disabled.

+

To configure an SMTP server for Synapse, modify the configuration section headed email, and be sure to have at least the smtp_host, smtp_port and notif_from fields filled out. You may also need to set smtp_user, smtp_pass, and require_transport_security.

+

If you are absolutely certain that you wish to continue using an identity +server for password resets, set trust_identity_server_for_password_resets to true.

+

See the sample configuration file <docs/sample_config.yaml>_ +for more details on these settings.

+

New email templates

+

Some new templates have been added to the default template directory for the purpose of the +homeserver sending its own password reset emails. If you have configured a custom +template_dir in your Synapse config, these files will need to be added.

+

password_reset.html and password_reset.txt are HTML and plain text templates respectively that contain the contents of what will be emailed to the user upon attempting to reset their password via email. password_reset_success.html and password_reset_failure.html are HTML files whose content (assuming no redirect URL is set) will be shown to the user after they attempt to click the link in the email sent to them.

+

Upgrading to v0.99.0

+

Please be aware that, before Synapse v1.0 is released around March 2019, you +will need to replace any self-signed certificates with those verified by a +root CA. Information on how to do so can be found at the ACME docs <docs/ACME.md>_.

+

For more information on configuring TLS certificates see the FAQ <docs/MSC1711_certificates_FAQ.md>_.

+

Upgrading to v0.34.0

+
  1. This release is the first to fully support Python 3. Synapse will now run on Python versions 3.5 or 3.6 (as well as 2.7). We recommend switching to Python 3, as it has been shown to give performance improvements.

     For users who have installed Synapse into a virtualenv, we recommend doing this by creating a new virtualenv. For example::

        virtualenv -p python3 ~/synapse/env3
        source ~/synapse/env3/bin/activate
        pip install matrix-synapse

     You can then start synapse as normal, having activated the new virtualenv::

        cd ~/synapse
        source env3/bin/activate
        synctl start

     Users who have installed from distribution packages should see the relevant package documentation. See below for notes on Debian packages.

     • When upgrading to Python 3, you must make sure that your log files are configured as UTF-8, by adding encoding: utf8 to the RotatingFileHandler configuration (if you have one) in your <server>.log.config file. For example, if your log.config file contains::

          handlers:
            file:
              class: logging.handlers.RotatingFileHandler
              formatter: precise
              filename: homeserver.log
              maxBytes: 104857600
              backupCount: 10
              filters: [context]
            console:
              class: logging.StreamHandler
              formatter: precise
              filters: [context]

       Then you should update this to be::

          handlers:
            file:
              class: logging.handlers.RotatingFileHandler
              formatter: precise
              filename: homeserver.log
              maxBytes: 104857600
              backupCount: 10
              filters: [context]
              encoding: utf8
            console:
              class: logging.StreamHandler
              formatter: precise
              filters: [context]

       There is no need to revert this change if downgrading to Python 2.

     We are also making available Debian packages which will run Synapse on Python 3. You can switch to these packages with apt-get install matrix-synapse-py3, however, please read debian/NEWS <https://github.com/matrix-org/synapse/blob/release-v0.34.0/debian/NEWS>_ before doing so. The existing matrix-synapse packages will continue to use Python 2 for the time being.

  2. This release removes riot.im from the default list of trusted identity servers.

     If riot.im is in your homeserver's list of trusted_third_party_id_servers, you should remove it. It was added in case a hypothetical future identity server was put there. If you don't remove it, users may be unable to deactivate their accounts.

  3. This release no longer installs the (unmaintained) Matrix Console web client as part of the default installation. It is possible to re-enable it by installing it separately and setting the web_client_location config option, but please consider switching to another client.

Upgrading to v0.33.7

+

This release removes the example email notification templates from +res/templates (they are now internal to the python package). This should +only affect you if you (a) deploy your Synapse instance from a git checkout or +a github snapshot URL, and (b) have email notifications enabled.

+

If you have email notifications enabled, you should ensure that +email.template_dir is either configured to point at a directory where you +have installed customised templates, or leave it unset to use the default +templates.

+

Upgrading to v0.27.3

+

This release expands the anonymous usage stats sent if the opt-in +report_stats configuration is set to true. We now capture RSS memory +and cpu use at a very coarse level. This requires administrators to install +the optional psutil python module.

+

We would appreciate it if you could assist by ensuring this module is available and report_stats is enabled. This will let us see if performance changes to synapse are having an impact on the general community.

+

Upgrading to v0.15.0

+

If you want to use the new URL previewing API (/_matrix/media/r0/preview_url) then you have to explicitly enable it in the config and update your dependencies. See README.rst for details.

+

Upgrading to v0.11.0

+

This release includes the option to send anonymous usage stats to matrix.org, and requires that administrators explicitly opt in or out by setting the report_stats option to either true or false.

+

We would really appreciate it if you could help our project out by reporting +anonymized usage statistics from your homeserver. Only very basic aggregate +data (e.g. number of users) will be reported, but it helps us to track the +growth of the Matrix community, and helps us to make Matrix a success, as well +as to convince other networks that they should peer with us.

+

Upgrading to v0.9.0

+

Application services have had a breaking API change in this version.

+

They can no longer register themselves with a home server using the AS HTTP API. This decision was made because a compromised application service with free rein to register any regex in effect grants full read/write access to the home server if a regex of .* is used. An attack where a compromised AS re-registers itself with .* was deemed too big of a security risk to ignore, and so the ability to register with the HS remotely has been removed.

+

It has been replaced by specifying a list of application service registrations in +homeserver.yaml::

+

app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]

+

Where registration-01.yaml looks like::

+

   url:  # e.g. "https://my.application.service.com"
   as_token:
   hs_token:
   sender_localpart:  # This is a new field which denotes the user_id localpart when using the AS token
   namespaces:
     users:
       - exclusive:
         regex:  # e.g. "@prefix_.*"
     aliases:
       - exclusive:
         regex:
     rooms:
       - exclusive:
         regex:

+

Upgrading to v0.8.0

+

Servers which use captchas will need to add their public key to::

+

static/client/register/register_config.js

+
window.matrixRegistrationConfig = {
+    recaptcha_public_key: "YOUR_PUBLIC_KEY"
+};
+
+

This is required in order to support registration fallback (typically used on +mobile devices).

+

Upgrading to v0.7.0

+

New dependencies are:

+
  • pydenticon
  • simplejson
  • syutil
  • matrix-angular-sdk
+

To pull in these dependencies in a virtual env, run::

+
python synapse/python_dependencies.py | xargs -n 1 pip install
+
+

Upgrading to v0.6.0

+

To pull in new dependencies, run::

+
python setup.py develop --user
+
+

This update includes a change to the database schema. To upgrade you first need +to upgrade the database by running::

+
python scripts/upgrade_db_to_v0.6.0.py <db> <server_name> <signing_key>
+
+

Where <db> is the location of the database, <server_name> is the +server name as specified in the synapse configuration, and <signing_key> is +the location of the signing key as specified in the synapse configuration.

+

This may take some time to complete. Failures of signatures and content hashes +can safely be ignored.

+

Upgrading to v0.5.1

+

Depending on precisely when you installed v0.5.0 you may have ended up with +a stale release of the reference matrix webclient installed as a python module. +To uninstall it and ensure you are depending on the latest module, please run::

+
$ pip uninstall syweb
+
+

Upgrading to v0.5.0

+

The webclient has been split out into a separate repository/package in this release. Before you restart your homeserver you will need to pull in the webclient package by running::

+

python setup.py develop --user

+

This release completely changes the database schema and so requires upgrading it before starting the new version of the homeserver.

The script "database-prepare-for-0.5.0.sh" should be used to upgrade the database. This will save all user information, such as logins and profiles, but will otherwise purge the database. This includes messages, the rooms the home server was a member of, and room alias mappings.

If you would like to keep your history, please take a copy of your database file and ask for help in #matrix:matrix.org. The upgrade process is, unfortunately, non-trivial and requires human intervention to resolve any resulting conflicts during the upgrade process.

Before running the command the homeserver should first be completely shut down. To run it, simply specify the location of the database, e.g.:

./scripts/database-prepare-for-0.5.0.sh "homeserver.db"

Once this has successfully completed it will be safe to restart the homeserver. You may notice that the homeserver takes a few seconds longer to restart than usual as it reinitializes the database.

On startup of the new version, users can either rejoin remote rooms using room aliases or by being reinvited. Alternatively, if any other homeserver sends a message to a room that the homeserver was previously in, the local HS will automatically rejoin the room.

+

Upgrading to v0.4.0

+

This release needs an updated syutil version. Run:

python setup.py develop

You will also need to upgrade your configuration as the signing key format has changed. Run:

python -m synapse.app.homeserver --config-path <CONFIG> --generate-config

Upgrading to v0.3.0

+

The registration API now closely matches the login API. This introduces a bit more backwards and forwards between the HS and the client, but this improves the overall flexibility of the API. You can now GET on /register to retrieve a list of valid registration flows. Upon choosing one, they are submitted in the same way as login, e.g.:

{
  type: m.login.password,
  user: foo,
  password: bar
}

The default HS supports 2 flows, with and without Identity Server email authentication. Enabling captcha on the HS will add in an extra step to all flows: m.login.recaptcha which must be completed before you can transition to the next stage. There is a new login type: m.login.email.identity which contains the threepidCreds key which was previously sent in the original register request. For more information on this, see the specification.

+

Web Client

+

The VoIP specification has changed between v0.2.0 and v0.3.0. Users should refresh any browser tabs to get the latest web client code. Users on v0.2.0 of the web client will not be able to call those on v0.3.0 and vice versa.

+

Upgrading to v0.2.0

+

The home server now requires SSL config to be set up before it can run. To automatically generate default config use:

$ python synapse/app/homeserver.py \
    --server-name machine.my.domain.name \
    --bind-port 8448 \
    --config-path homeserver.config \
    --generate-config

This config can be edited if desired, for example to specify a different SSL certificate to use. Once done you can run the home server using:

$ python synapse/app/homeserver.py --config-path homeserver.config

See the README.rst for more information.

+

Also note that some config options have been renamed, including:

+
  • "host" to "server-name"
  • "database" to "database-path"
  • "port" to "bind-port" and "unsecure-port"

Upgrading to v0.0.1

+

This release completely changes the database schema and so requires upgrading it before starting the new version of the homeserver.

The script "database-prepare-for-0.0.1.sh" should be used to upgrade the database. This will save all user information, such as logins and profiles, but will otherwise purge the database. This includes messages, the rooms the home server was a member of, and room alias mappings.

Before running the command the homeserver should first be completely shut down. To run it, simply specify the location of the database, e.g.:

./scripts/database-prepare-for-0.0.1.sh "homeserver.db"

Once this has successfully completed it will be safe to restart the homeserver. You may notice that the homeserver takes a few seconds longer to restart than usual as it reinitializes the database.

On startup of the new version, users can either rejoin remote rooms using room aliases or by being reinvited. Alternatively, if any other homeserver sends a message to a room that the homeserver was previously in, the local HS will automatically rejoin the room.

+

MSC1711 Certificates FAQ

+

Historical Note

+

This document was originally written to guide server admins through the upgrade path towards Synapse 1.0. Specifically, MSC1711 required that all servers present valid TLS certificates on their federation API. Admins were encouraged to achieve compliance from version 0.99.0 (released in February 2019) ahead of version 1.0 (released June 2019) enforcing the certificate checks.

Much of what follows is now outdated since most admins will have already upgraded; however, it may be of use to those with old installs returning to the project.

If you are setting up a server from scratch you almost certainly should look at the installation guide instead.

+

Introduction

+

The goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0.0. It supports the r0.1 release of the server to server specification, but is compatible with both the legacy Matrix federation behaviour (pre-r0.1) as well as post-r0.1 behaviour, in order to allow for a smooth upgrade across the federation.

The most important thing to know is that Synapse 1.0.0 will require a valid TLS certificate on federation endpoints. Self-signed certificates will not be sufficient.

Synapse 0.99.0 makes it easy to configure TLS certificates and will interoperate with both >= 1.0.0 servers as well as existing servers yet to upgrade.

It is critical that all admins upgrade to 0.99.0 and configure a valid TLS certificate. Admins will have 1 month to do so, after which 1.0.0 will be released and those servers without a valid certificate will no longer be able to federate with >= 1.0.0 servers.

Full details on how to carry out this configuration change are given below. A timeline and some frequently asked questions are also given below.

For more details and context on the release of the r0.1 Server/Server API and imminent Matrix 1.0 release, you can also see our main talk from FOSDEM 2019.

+

Contents

+
  • Timeline
  • Configuring certificates for compatibility with Synapse 1.0
  • FAQ
    • Synapse 0.99.0 has just been released, what do I need to do right now?
    • How do I upgrade?
    • What will happen if I do not set up a valid federation certificate immediately?
    • What will happen if I do nothing at all?
    • When do I need a SRV record or .well-known URI?
    • Can I still use an SRV record?
    • I have created a .well-known URI. Do I still need an SRV record?
    • It used to work just fine, why are you breaking everything?
    • Can I manage my own certificates rather than having Synapse renew certificates itself?
    • Do you still recommend against using a reverse proxy on the federation port?
    • Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
    • Do I need the same certificate for the client and federation port?
    • How do I tell Synapse to reload my keys/certificates after I replace them?

Timeline

+

5th Feb 2019 - Synapse 0.99.0 is released.

+

All server admins are encouraged to upgrade.

+

0.99.0:

+
  • provides support for ACME to make setting up Let's Encrypt certs easy, as well as .well-known support.
  • does not enforce that a valid CA cert is present on the federation API, but rather makes it easy to set one up.
  • provides support for .well-known

Admins should upgrade and configure a valid CA cert. Homeservers that require a .well-known entry (see below) should retain their SRV record and use it alongside their .well-known record.

+

10th June 2019 - Synapse 1.0.0 is released

+

1.0.0 is scheduled for release on 10th June. In accordance with the S2S spec, 1.0.0 will enforce certificate validity. This means that any homeserver without a valid certificate after this point will no longer be able to federate with 1.0.0 servers.

+

Configuring certificates for compatibility with Synapse 1.0.0

+

If you do not currently have an SRV record

+

In this case, your server_name points to the host where your Synapse is running. There is no need to create a .well-known URI or an SRV record, but you will need to give Synapse a valid, signed certificate.

The easiest way to do that is with Synapse's built-in ACME (Let's Encrypt) support. Full details are in ACME.md but, in a nutshell:

+
  1. Allow Synapse to listen on port 80 with authbind, or forward it from a reverse proxy.
  2. Enable acme support in homeserver.yaml.
  3. Move your old certificates out of the way.
  4. Restart Synapse.
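As a rough sketch, the acme section enabled in step 2 might look something like the following (the option names and port value here are recalled from the 0.99-era defaults and should be treated as illustrative; check ACME.md for the authoritative settings):

```yaml
# Illustrative only - see ACME.md for the authoritative options.
acme:
    # Enable Synapse's built-in ACME (Let's Encrypt) support.
    enabled: true
    # Local port Synapse listens on for ACME challenges; port 80 on the
    # public interface should be forwarded here (or use authbind).
    port: 8009
```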

If you do have an SRV record currently

+

If you are using an SRV record, your matrix domain (server_name) may not point to the same host that your Synapse is running on (the 'target domain'). (If it does, you can follow the recommendation above; otherwise, read on.)

Let's assume that your server_name is example.com, and your Synapse is hosted at a target domain of customer.example.net. Currently you should have an SRV record which looks like:

_matrix._tcp.example.com. IN SRV 10 5 8000 customer.example.net.

In this situation, you have three choices for how to proceed:

+

Option 1: give Synapse a certificate for your matrix domain

+

Synapse 1.0 will expect your server to present a TLS certificate for your server_name (example.com in the above example). You can achieve this by doing one of the following:

+
  • Acquire a certificate for the server_name yourself (for example, using certbot), and give it and the key to Synapse via tls_certificate_path and tls_private_key_path, or:
  • Use Synapse's ACME support, and forward port 80 on the server_name domain to your Synapse instance.
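For the first option, the relevant homeserver.yaml settings are the two certificate paths mentioned above (the file paths shown are placeholders for wherever certbot stores its output):

```yaml
# Certificate for the matrix domain (example.com in the example above).
# Paths are placeholders - point them at your actual certbot output.
tls_certificate_path: "/etc/letsencrypt/live/example.com/fullchain.pem"
tls_private_key_path: "/etc/letsencrypt/live/example.com/privkey.pem"
```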

Option 2: run Synapse behind a reverse proxy

+

If you have an existing reverse proxy set up with correct TLS certificates for your domain, you can simply route all traffic through the reverse proxy by updating the SRV record appropriately (or removing it, if the proxy listens on 8448).

See reverse_proxy.md for information on setting up a reverse proxy.

+

Option 3: add a .well-known file to delegate your matrix traffic

+

This will allow you to keep Synapse on a separate domain, without having to give it a certificate for the matrix domain.

You can do this with a .well-known file as follows:

+
  1. Keep the SRV record in place - it is needed for backwards compatibility with Synapse 0.34 and earlier.

  2. Give Synapse a certificate corresponding to the target domain (customer.example.net in the above example). You can either use Synapse's built-in ACME support for this (via the domain parameter in the acme section), or acquire a certificate yourself and give it to Synapse via tls_certificate_path and tls_private_key_path.

  3. Restart Synapse to ensure the new certificate is loaded.

  4. Arrange for a .well-known file at https://<server_name>/.well-known/matrix/server with contents:

     {"m.server": "<target server name>"}

     where the target server name is resolved as usual (i.e. SRV lookup, falling back to talking to port 8448).

     In the above example, where synapse is listening on port 8000, https://example.com/.well-known/matrix/server should have m.server set to one of:

     1. customer.example.net ─ with a SRV record on _matrix._tcp.customer.example.com pointing to port 8000, or:
     2. customer.example.net ─ updating synapse to listen on the default port 8448, or:
     3. customer.example.net:8000 ─ ensuring that if there is a reverse proxy on customer.example.net:8000 it correctly handles HTTP requests with Host header set to customer.example.net:8000.
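The resolution rules above can be illustrated with a short, hypothetical sketch of how a federating server interprets an m.server value (this illustrates the host:port splitting described above; it is not Synapse's actual implementation):

```python
# Hypothetical sketch of how a federating server interprets the "m.server"
# value from /.well-known/matrix/server. Illustrates the resolution rules
# described above; it is not Synapse's actual code.

def parse_m_server(m_server):
    """Split an m.server value into (hostname, port).

    If no explicit port is present, port is None: the caller should then
    fall back to an SRV lookup, and finally to the default port 8448.
    """
    host, sep, port = m_server.rpartition(":")
    if sep and port.isdigit():
        return host, int(port)
    return m_server, None

# With an explicit port, connect straight to it:
print(parse_m_server("customer.example.net:8000"))  # ('customer.example.net', 8000)
# Without one, try SRV on the delegated name, then fall back to 8448:
print(parse_m_server("customer.example.net"))       # ('customer.example.net', None)
```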

FAQ

+

Synapse 0.99.0 has just been released, what do I need to do right now?

+

Upgrade as soon as you can in preparation for Synapse 1.0.0, and update your TLS certificates as above.

+

What will happen if I do not set up a valid federation certificate immediately?

+

Nothing initially, but once 1.0.0 is in the wild it will not be possible to federate with 1.0.0 servers.

+

What will happen if I do nothing at all?

+

If the admin takes no action at all, and remains on a Synapse < 0.99.0, then the homeserver will be unable to federate with those who have implemented .well-known. Then, as above, once the month upgrade window has expired the homeserver will not be able to federate with any Synapse >= 1.0.0.

+

When do I need a SRV record or .well-known URI?

+

If your homeserver listens on the default federation port (8448), and your server_name points to the host that your homeserver runs on, you do not need an SRV record or .well-known/matrix/server URI.

For instance, if you registered example.com and pointed its DNS A record at a fresh Upcloud VPS or similar, you could install Synapse 0.99 on that host, giving it a server_name of example.com, and it would automatically generate a valid TLS certificate for you via Let's Encrypt and no SRV record or .well-known URI would be needed.

This is the common case, although you can add an SRV record or .well-known/matrix/server URI for completeness if you wish.

However, if your server does not listen on port 8448, or if your server_name does not point to the host that your homeserver runs on, you will need to let other servers know how to find it.

In this case, you should see "If you do have an SRV record currently" above.

+

Can I still use an SRV record?

+

Firstly, if you didn't need an SRV record before (because your server is listening on port 8448 of your server_name), you certainly don't need one now: the defaults are still the same.

If you previously had an SRV record, you can keep using it provided you are able to give Synapse a TLS certificate corresponding to your server name. For example, suppose you had the following SRV record, which directs matrix traffic for example.com to matrix.example.com:443:

_matrix._tcp.example.com. IN SRV 10 5 443 matrix.example.com

In this case, Synapse must be given a certificate for example.com - or be configured to acquire one from Let's Encrypt.

If you are unable to give Synapse a certificate for your server_name, you will also need to use a .well-known URI instead. However, see also "I have created a .well-known URI. Do I still need an SRV record?".

+

I have created a .well-known URI. Do I still need an SRV record?

+

As of Synapse 0.99, Synapse will first check for the existence of a .well-known URI and follow any delegation it suggests. It will only then check for the existence of an SRV record.

That means that the SRV record will often be redundant. However, you should remember that there may still be older versions of Synapse in the federation which do not understand .well-known URIs, so if you removed your SRV record you would no longer be able to federate with them.

It is therefore best to leave the SRV record in place for now. Synapse 0.34 and earlier will follow the SRV record (and not care about the invalid certificate). Synapse 0.99 and later will follow the .well-known URI, with the correct certificate chain.

+

It used to work just fine, why are you breaking everything?

+

We have always wanted Matrix servers to be as easy to set up as possible, and so back when we started federation in 2014 we didn't want admins to have to go through the cumbersome process of buying a valid TLS certificate to run a server. This was before Let's Encrypt came along and made getting a free and valid TLS certificate straightforward. So instead, we adopted a system based on Perspectives: an approach where you check a set of "notary servers" (in practice, homeservers) to vouch for the validity of a certificate rather than having it signed by a CA. As long as enough different notaries agree on the certificate's validity, then it is trusted.

However, in practice this has never worked properly. Most people only use the default notary server (matrix.org), leading to inadvertent centralisation which we want to eliminate. Meanwhile, we never implemented the full consensus algorithm to query the servers participating in a room to determine consensus on whether a given certificate is valid. This is fiddly to get right (especially in face of sybil attacks), and we found ourselves questioning whether it was worth the effort to finish the work and commit to maintaining a secure certificate validation system as opposed to focusing on core Matrix development.

Meanwhile, Let's Encrypt came along in 2016, and put the final nail in the coffin of the Perspectives project (which was already pretty dead). So, the Spec Core Team decided that a better approach would be to mandate valid TLS certificates for federation alongside the rest of the Web. More details can be found in MSC1711.

This results in a breaking change, which is disruptive, but absolutely critical for the security model. However, the existence of Let's Encrypt as a trivial way to replace the old self-signed certificates with valid CA-signed ones helps smooth things over massively, especially as Synapse can now automate Let's Encrypt certificate generation if needed.

+

Can I manage my own certificates rather than having Synapse renew certificates itself?

+

Yes, you are welcome to manage your certificates yourself. Synapse will only attempt to obtain certificates from Let's Encrypt if you configure it to do so. The only requirement is that there is a valid TLS cert present for federation endpoints.

+

Do you still recommend against using a reverse proxy on the federation port?

+

We no longer actively recommend against using a reverse proxy. Many admins will find it easier to direct federation traffic to a reverse proxy and manage their own TLS certificates, and this is a supported configuration.

See reverse_proxy.md for information on setting up a reverse proxy.

+

Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?

+

Practically speaking, this is no longer necessary.

+

If you are using a reverse proxy for all of your TLS traffic, then you can set no_tls: True. In that case, the only reason Synapse needs the certificate is to populate a legacy 'tls_fingerprints' field in the federation API. This is ignored by Synapse 0.99.0 and later, and the only time pre-0.99 Synapses will check it is when attempting to fetch the server keys - and generally this is delegated via matrix.org, which is on 0.99.0.

However, there is a bug in Synapse 0.99.0 (4554) which prevents Synapse from starting if you do not give it a TLS certificate. To work around this, you can give it any TLS certificate at all. This will be fixed soon.

+

Do I need the same certificate for the client and federation port?

+

No. There is nothing stopping you from using different certificates, particularly if you are using a reverse proxy. However, Synapse will use the same certificate on any ports where TLS is configured.

+

How do I tell Synapse to reload my keys/certificates after I replace them?

+

Synapse will reload the keys and certificates when it receives a SIGHUP - for example kill -HUP $(cat homeserver.pid). Alternatively, simply restart Synapse, though this will result in downtime while it restarts.

+

Setting up federation

+

Federation is the process by which users on different servers can participate in the same room. For this to work, those other servers must be able to contact yours to send messages.

The server_name configured in the Synapse configuration file (often homeserver.yaml) defines how resources (users, rooms, etc.) will be identified (eg: @user:example.com, #room:example.com). By default, it is also the domain that other servers will use to try to reach your server (via port 8448). This is easy to set up and will work provided you set the server_name to match your machine's public DNS hostname.

For this default configuration to work, you will need to listen for TLS connections on port 8448. The preferred way to do that is by using a reverse proxy: see reverse_proxy.md for instructions on how to correctly set one up.

In some cases you might not want to run Synapse on the machine that has the server_name as its public DNS hostname, or you might want federation traffic to use a different port than 8448. For example, you might want to have your user names look like @user:example.com, but you want to run Synapse on synapse.example.com on port 443. This can be done using delegation, which allows an admin to control where federation traffic should be sent. See delegate.md for instructions on how to set this up.

Once federation has been configured, you should be able to join a room over federation. A good place to start is #synapse:matrix.org - a room for Synapse admins.

+

Troubleshooting

+

You can use the federation tester to check if your homeserver is configured correctly. Alternatively try the JSON API used by the federation tester. Note that you'll have to modify this URL to replace DOMAIN with your server_name. Hitting the API directly provides extra detail.

The typical failure mode for federation is that when the server tries to join a room, it is rejected with "401: Unauthorized". Generally this means that other servers in the room could not access yours. (Joining a room over federation is a complicated dance which requires connections in both directions.)

Another common problem is that people on other servers can't join rooms that you invite them to. This can be caused by an incorrectly-configured reverse proxy: see reverse_proxy.md for instructions on how to correctly configure a reverse proxy.

+

Known issues

+

HTTP 308 Permanent Redirect redirects are not followed: Due to missing features in the HTTP library used by Synapse, 308 redirects are currently not followed by federating servers, which can cause M_UNKNOWN or 401 Unauthorized errors. This may affect users who are redirecting apex-to-www (e.g. example.com -> www.example.com), and especially users of the Kubernetes Nginx Ingress module, which uses 308 redirect codes by default. For those Kubernetes users, this Stackoverflow post might be helpful. For other users, switching to a 301 Moved Permanently code may be an option. 308 redirect codes will be supported properly in a future release of Synapse.

+

Running a demo federation of Synapses

+

If you want to get up and running quickly with a trio of homeservers in a private federation, there is a script in the demo directory. This is mainly useful just for development purposes. See demo/README.

+

Configuration

+

This section contains information on tweaking Synapse via the various options in the configuration file. A configuration file should have been generated when you installed Synapse.

+

Homeserver Sample Configuration File

+

Below is a sample homeserver configuration file. The homeserver configuration file can be tweaked to change the behaviour of your homeserver. A restart of the server is generally required to apply any changes made to this file.

Note that the contents below are not intended to be copied and used as the basis for a real homeserver.yaml. Instead, if you are starting from scratch, please generate a fresh config using Synapse by following the instructions in Installation.

+
# This file is maintained as an up-to-date snapshot of the default
# homeserver.yaml configuration generated by Synapse.
#
# It is intended to act as a reference for the default configuration,
# helping admins keep track of new options and other changes, and compare
# their configs with the current default.  As such, many of the actual
# config values shown are placeholders.
#
# It is *not* intended to be copied and used as the basis for a real
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.

# Configuration options that take a time period can be set using a number
# followed by a letter. Letters have the following meanings:
# s = second
# m = minute
# h = hour
# d = day
# w = week
# y = year
# For example, setting redaction_retention_period: 5m would remove redacted
# messages from the database after 5 minutes, rather than 5 months.

################################################################################

# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html

## Server ##

# The public-facing domain of the server
#
# The server_name name will appear at the end of usernames and room addresses
# created on this server. For example if the server_name was example.com,
# usernames on this server would be in the format @user:example.com
#
# In most cases you should avoid using a matrix specific subdomain such as
# matrix.example.com or synapse.example.com as the server_name for the same
# reasons you wouldn't use user@email.example.com as your email address.
# See https://github.com/matrix-org/synapse/blob/master/docs/delegate.md
# for information on how to host Synapse on a subdomain while preserving
# a clean server_name.
#
# The server_name cannot be changed later so it is important to
# configure this correctly before you start Synapse. It should be all
# lowercase and may contain an explicit port.
# Examples: matrix.org, localhost:8080
#
server_name: "SERVERNAME"

# When running as a daemon, the file to store the pid in
#
pid_file: DATADIR/homeserver.pid

# The absolute URL to the web client which /_matrix/client will redirect
# to if 'webclient' is configured under the 'listeners' configuration.
#
# This option can be also set to the filesystem path to the web client
# which will be served at /_matrix/client/ if 'webclient' is configured
# under the 'listeners' configuration, however this is a security risk:
# https://github.com/matrix-org/synapse#security-note
#
#web_client_location: https://riot.example.com/

# The public-facing base URL that clients use to access this Homeserver (not
# including _matrix/...). This is the same URL a user might enter into the
# 'Custom Homeserver URL' field on their client. If you use Synapse with a
# reverse proxy, this should be the URL to reach Synapse via the proxy.
# Otherwise, it should be the URL to reach Synapse's client HTTP listener (see
# 'listeners' below).
#
#public_baseurl: https://example.com/

# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
#
#soft_file_limit: 0

# Presence tracking allows users to see the state (e.g online/offline)
# of other local and remote users.
#
presence:
  # Uncomment to disable presence tracking on this homeserver. This option
  # replaces the previous top-level 'use_presence' option.
  #
  #enabled: false

  # Presence routers are third-party modules that can specify additional logic
  # to where presence updates from users are routed.
  #
  presence_router:
    # The custom module's class. Uncomment to use a custom presence router module.
    #
    #module: "my_custom_router.PresenceRouter"

    # Configuration options of the custom module. Refer to your module's
    # documentation for available options.
    #
    #config:
    #  example_option: 'something'

# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, unless allow_profile_lookup_over_federation is set to false.
#
#require_auth_for_profile_requests: true

# Uncomment to require a user to share a room with another user in order
# to retrieve their profile information. Only checked on Client-Server
# requests. Profile requests from other servers should be checked by the
# requesting server. Defaults to 'false'.
#
#limit_profile_requests_to_users_who_share_rooms: true

# Uncomment to prevent a user's profile data from being retrieved and
# displayed in a room until they have joined it. By default, a user's
# profile data is included in an invite event, regardless of the values
# of the above two settings, and whether or not the users share a server.
# Defaults to 'true'.
#
#include_profile_data_on_invite: false

# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
#
#allow_public_rooms_without_auth: true

# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
#
#allow_public_rooms_over_federation: true

# The default room version for newly created rooms.
#
# Known room versions are listed here:
# https://matrix.org/docs/spec/#complete-list-of-room-versions
#
# For example, for room version 1, default_room_version should be set
# to "1".
#
#default_room_version: "6"

# The GC threshold parameters to pass to `gc.set_threshold`, if defined
#
#gc_thresholds: [700, 10, 10]

# The minimum time in seconds between each GC for a generation, regardless of
# the GC thresholds. This ensures that we don't do GC too frequently.
#
# A value of `[1s, 10s, 30s]` indicates that a second must pass between consecutive
# generation 0 GCs, etc.
#
# Defaults to `[1s, 10s, 30s]`.
#
#gc_min_interval: [0.5s, 30s, 1m]

# Set the limit on the returned events in the timeline in the get
# and sync operations. The default value is 100. -1 means no upper limit.
#
# Uncomment the following to increase the limit to 5000.
#
#filter_timeline_limit: 5000

# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
#
#block_non_admin_invites: true

# Room searching
#
# If disabled, new messages will not be indexed for searching and users
# will receive errors when searching for messages. Defaults to enabled.
#
#enable_search: false

# Prevent outgoing requests from being sent to the following blacklisted IP address
# CIDR ranges. If this option is not specified then it defaults to private IP
# address ranges (see the example below).
#
# The blacklist applies to the outbound requests for federation, identity servers,
# push servers, and for checking key validity for third-party invite events.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
# This option replaces federation_ip_range_blacklist in Synapse v1.25.0.
#
#ip_range_blacklist:
#  - '127.0.0.0/8'
#  - '10.0.0.0/8'
#  - '172.16.0.0/12'
#  - '192.168.0.0/16'
#  - '100.64.0.0/10'
#  - '192.0.0.0/24'
#  - '169.254.0.0/16'
#  - '192.88.99.0/24'
#  - '198.18.0.0/15'
#  - '192.0.2.0/24'
#  - '198.51.100.0/24'
#  - '203.0.113.0/24'
#  - '224.0.0.0/4'
#  - '::1/128'
#  - 'fe80::/10'
#  - 'fc00::/7'
#  - '2001:db8::/32'
#  - 'ff00::/8'
#  - 'fec0::/10'

# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
# third-party invite events. This is useful for specifying exceptions to
# wide-ranging blacklisted target IP ranges - e.g. for communication with
# a push server only visible in your network.
#
# This whitelist overrides ip_range_blacklist and defaults to an empty
# list.
#
#ip_range_whitelist:
#   - '192.168.1.1'

# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
# Options for each listener include:
#
#   port: the TCP port to bind to
#
#   bind_addresses: a list of local addresses to listen on. The default is
#       'all local interfaces'.
#
#   type: the type of listener. Normally 'http', but other valid options are:
#       'manhole' (see docs/manhole.md),
#       'metrics' (see docs/metrics-howto.md),
#       'replication' (see docs/workers.md).
#
#   tls: set to true to enable TLS for this listener. Will use the TLS
#       key/cert specified in tls_private_key_path / tls_certificate_path.
#
#   x_forwarded: Only valid for an 'http' listener. Set to true to use the
#       X-Forwarded-For header as the client IP. Useful when Synapse is
#       behind a reverse-proxy.
#
#   resources: Only valid for an 'http' listener. A list of resources to host
#       on this port. Options for each resource are:
+#
+#       names: a list of names of HTTP resources. See below for a list of
+#           valid resource names.
+#
+#       compress: set to true to enable HTTP compression for this resource.
+#
+#   additional_resources: Only valid for an 'http' listener. A map of
+#        additional endpoints which should be loaded via dynamic modules.
+#
+# Valid resource names are:
+#
+#   client: the client-server API (/_matrix/client), and the synapse admin
+#       API (/_synapse/admin). Also implies 'media' and 'static'.
+#
+#   consent: user consent forms (/_matrix/consent). See
+#       docs/consent_tracking.md.
+#
+#   federation: the server-server API (/_matrix/federation). Also implies
+#       'media', 'keys', 'openid'
+#
+#   keys: the key discovery API (/_matrix/keys).
+#
+#   media: the media API (/_matrix/media).
+#
+#   metrics: the metrics interface. See docs/metrics-howto.md.
+#
+#   openid: OpenID authentication.
+#
+#   replication: the HTTP replication API (/_synapse/replication). See
+#       docs/workers.md.
+#
+#   static: static resources under synapse/static (/_matrix/static). (Mostly
+#       useful for 'fallback authentication'.)
+#
+#   webclient: A web client. Requires web_client_location to be set.
+#
+listeners:
+  # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
+  #
+  # Disabled by default. To enable it, uncomment the following. (Note that you
+  # will also need to give Synapse a TLS key and certificate: see the TLS section
+  # below.)
+  #
+  #- port: 8448
+  #  type: http
+  #  tls: true
+  #  resources:
+  #    - names: [client, federation]
+
+  # Insecure HTTP listener: for when matrix traffic passes through a reverse proxy
+  # that unwraps TLS.
+  #
+  # If you plan to use a reverse proxy, please see
+  # https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
+  #
+  - port: 8008
+    tls: false
+    type: http
+    x_forwarded: true
+    bind_addresses: ['::1', '127.0.0.1']
+
+    resources:
+      - names: [client, federation]
+        compress: false
+
+    # example additional_resources:
+    #
+    #additional_resources:
+    #  "/_matrix/my/custom/endpoint":
+    #    module: my_module.CustomRequestHandler
+    #    config: {}
+
+  # Turn on the twisted ssh manhole service on localhost on the given
+  # port.
+  #
+  #- port: 9000
+  #  bind_addresses: ['::1', '127.0.0.1']
+  #  type: manhole
+
+# Forward extremities can build up in a room due to networking delays between
+# homeservers. Once this happens in a large room, calculation of the state of
+# that room can become quite expensive. To mitigate this, once the number of
+# forward extremities reaches a given threshold, Synapse will send an
+# org.matrix.dummy_event event, which will reduce the forward extremities
+# in the room.
+#
+# This setting defines the threshold (i.e. number of forward extremities in the
+# room) at which dummy events are sent. The default value is 10.
+#
+#dummy_events_threshold: 5
+
+
+## Homeserver blocking ##
+
+# How to reach the server admin, used in ResourceLimitError
+#
+#admin_contact: 'mailto:admin@server.com'
+
+# Global blocking
+#
+#hs_disabled: false
+#hs_disabled_message: 'Human readable reason for why the HS is blocked'
+
+# Monthly Active User Blocking
+#
+# Used in cases where the admin or server owner wants to limit the
+# number of monthly active users.
+#
+# 'limit_usage_by_mau' disables/enables monthly active user blocking. When
+# enabled and a limit is reached the server returns a 'ResourceLimitError'
+# with error type Codes.RESOURCE_LIMIT_EXCEEDED
+#
+# 'max_mau_value' is the hard limit of monthly active users above which
+# the server will start blocking user actions.
+#
+# 'mau_trial_days' is a means to add a grace period for active users. It
+# means that users must be active for this number of days before they
+# can be considered active and guards against the case where lots of users
+# sign up in a short space of time never to return after their initial
+# session.
+#
+# 'mau_limit_alerting' is a means of limiting client-side alerting
+# should the mau limit be reached. This is useful for small instances
+# where the admin has 5 mau seats (say) for 5 specific people and no
+# interest in increasing the mau limit further. Defaults to true, which
+# means that alerting is enabled.
+#
+#limit_usage_by_mau: false
+#max_mau_value: 50
+#mau_trial_days: 2
+#mau_limit_alerting: false
+
+# If enabled, the metrics for the number of monthly active users will
+# be populated, however no one will be limited. If limit_usage_by_mau
+# is true, this is implied to be true.
+#
+#mau_stats_only: false
+
+# Sometimes the server admin will want to ensure certain accounts are
+# never blocked by mau checking. These accounts are specified here.
+#
+#mau_limit_reserved_threepids:
+#  - medium: 'email'
+#    address: 'reserved_user@example.com'
+
+# Used by phonehome stats to group together related servers.
+#server_context: context
+
+# Resource-constrained homeserver settings
+#
+# When this is enabled, the room "complexity" will be checked before a user
+# joins a new remote room. If it is above the complexity limit, the server will
+# disallow joining, or will instantly leave.
+#
+# Room complexity is an arbitrary measure based on factors such as the number of
+# users in the room.
+#
+limit_remote_rooms:
+  # Uncomment to enable room complexity checking.
+  #
+  #enabled: true
+
+  # the limit above which rooms cannot be joined. The default is 1.0.
+  #
+  #complexity: 0.5
+
+  # override the error which is returned when the room is too complex.
+  #
+  #complexity_error: "This room is too complex."
+
+  # allow server admins to join complex rooms. Default is false.
+  #
+  #admins_can_join: true
+
+# Whether to require a user to be in the room to add an alias to it.
+# Defaults to 'true'.
+#
+#require_membership_for_aliases: false
+
+# Whether to allow per-room membership profiles through the sending of
+# membership events with profile information that differs from the target's
+# global profile. Defaults to 'true'.
+#
+#allow_per_room_profiles: false
+
+# How long to keep redacted events in unredacted form in the database. After
+# this period redacted events get replaced with their redacted form in the DB.
+#
+# Defaults to `7d`. Set to `null` to disable.
+#
+#redaction_retention_period: 28d
+
+# How long to track users' last seen time and IPs in the database.
+#
+# Defaults to `28d`. Set to `null` to disable clearing out of old rows.
+#
+#user_ips_max_age: 14d
+
+# Message retention policy at the server level.
+#
+# Room admins and mods can define a retention period for their rooms using the
+# 'm.room.retention' state event, and server admins can cap this period by setting
+# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
+#
+# If this feature is enabled, Synapse will regularly look for and purge events
+# which are older than the room's maximum retention period. Synapse will also
+# filter events received over federation so that events that should have been
+# purged are ignored and not stored again.
+#
+retention:
+  # The message retention policies feature is disabled by default. Uncomment the
+  # following line to enable it.
+  #
+  #enabled: true
+
+  # Default retention policy. If set, Synapse will apply it to rooms that lack the
+  # 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
+  # matter much because Synapse doesn't take it into account yet.
+  #
+  #default_policy:
+  #  min_lifetime: 1d
+  #  max_lifetime: 1y
+
+  # Retention policy limits. If set, and the state of a room contains a
+  # 'm.room.retention' event in its state which contains a 'min_lifetime' or a
+  # 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
+  # to these limits when running purge jobs.
+  #
+  #allowed_lifetime_min: 1d
+  #allowed_lifetime_max: 1y
+
+  # Server admins can define the settings of the background jobs purging the
+  # events whose lifetime has expired under the 'purge_jobs' section.
+  #
+  # If no configuration is provided, a single job will be set up to delete expired
+  # events in every room daily.
+  #
+  # Each job's configuration defines which range of message lifetimes the job
+  # takes care of. For example, if 'shortest_max_lifetime' is '2d' and
+  # 'longest_max_lifetime' is '3d', the job will handle purging expired events in
+  # rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
+  # lower than or equal to 3 days. Both the minimum and the maximum value of a
+  # range are optional, e.g. a job with no 'shortest_max_lifetime' and a
+  # 'longest_max_lifetime' of '3d' will handle every room with a retention policy
+  # whose 'max_lifetime' is lower than or equal to three days.
+  #
+  # The rationale for this per-job configuration is that some rooms might have a
+  # retention policy with a low 'max_lifetime', where history needs to be purged
+  # of outdated messages on a more frequent basis (e.g. every 12h) than for the
+  # rest of the rooms, without that purge being performed by a job that iterates
+  # over every room it knows of, which could be heavy on the server.
+  #
+  # If any purge job is configured, it is strongly recommended to have at least
+  # a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
+  # set, or one job without 'shortest_max_lifetime' and one job without
+  # 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
+  # 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
+  # room's policy to these values is done after the policies are retrieved from
+  # Synapse's database (which is done using the range specified in a purge job's
+  # configuration).
+  #
+  #purge_jobs:
+  #  - longest_max_lifetime: 3d
+  #    interval: 12h
+  #  - shortest_max_lifetime: 3d
+  #    interval: 1d
+
+# Inhibits the /requestToken endpoints from returning an error that might leak
+# information about whether an e-mail address is in use or not on this
+# homeserver.
+# Note that for some endpoints the error situation is the e-mail already
+# being in use, and for others it is the e-mail not being in use.
+# If this option is enabled, instead of returning an error, these endpoints will
+# act as if no error happened and return a fake session ID ('sid') to clients.
+#
+#request_token_inhibit_3pid_errors: true
+
+# A list of domains that the domain portion of 'next_link' parameters
+# must match.
+#
+# This parameter is optionally provided by clients while requesting
+# validation of an email or phone number, and maps to a link that
+# users will be automatically redirected to after validation
+# succeeds. Clients can make use of this parameter to aid the validation
+# process.
+#
+# The whitelist is applied whether the homeserver or an
+# identity server is handling validation.
+#
+# The default value is no whitelist functionality; all domains are
+# allowed. Setting this value to an empty list will instead disallow
+# all domains.
+#
+#next_link_domain_whitelist: ["matrix.org"]
+
+
+## TLS ##
+
+# PEM-encoded X509 certificate for TLS.
+# This certificate, as of Synapse 1.0, will need to be a valid and verifiable
+# certificate, signed by a recognised Certificate Authority.
+#
+# See 'ACME support' below to enable auto-provisioning this certificate via
+# Let's Encrypt.
+#
+# If supplying your own, be sure to use a `.pem` file that includes the
+# full certificate chain including any intermediate certificates (for
+# instance, if using certbot, use `fullchain.pem` as your certificate,
+# not `cert.pem`).
+#
+#tls_certificate_path: "CONFDIR/SERVERNAME.tls.crt"
+
+# PEM-encoded private key for TLS
+#
+#tls_private_key_path: "CONFDIR/SERVERNAME.tls.key"
+
+# Whether to verify TLS server certificates for outbound federation requests.
+#
+# Defaults to `true`. To disable certificate verification, uncomment the
+# following line.
+#
+#federation_verify_certificates: false
+
+# The minimum TLS version that will be used for outbound federation requests.
+#
+# Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
+# that setting this value higher than `1.2` will prevent federation to most
+# of the public Matrix network: only configure it to `1.3` if you have an
+# entirely private federation setup and you can ensure TLS 1.3 support.
+#
+#federation_client_minimum_tls_version: 1.2
+
+# Skip federation certificate verification on the following whitelist
+# of domains.
+#
+# This setting should only be used in very specific cases, such as
+# federation over Tor hidden services and similar. For private networks
+# of homeservers, you likely want to use a private CA instead.
+#
+# Only effective if federation_verify_certificates is `true`.
+#
+#federation_certificate_verification_whitelist:
+#  - lon.example.com
+#  - "*.domain.com"
+#  - "*.onion"
+
+# List of custom certificate authorities for federation traffic.
+#
+# This setting should only normally be used within a private network of
+# homeservers.
+#
+# Note that this list will replace those that are provided by your
+# operating environment. Certificates must be in PEM format.
+#
+#federation_custom_ca_list:
+#  - myCA1.pem
+#  - myCA2.pem
+#  - myCA3.pem
+
+# ACME support: This will configure Synapse to request a valid TLS certificate
+# for your configured `server_name` via Let's Encrypt.
+#
+# Note that ACME v1 is now deprecated, and Synapse currently doesn't support
+# ACME v2. This means that this feature currently won't work with installs set
+# up after November 2019. For more info, and alternative solutions, see
+# https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
+#
+# Note that provisioning a certificate in this way requires port 80 to be
+# routed to Synapse so that it can complete the http-01 ACME challenge.
+# By default, if you enable ACME support, Synapse will attempt to listen on
+# port 80 for incoming http-01 challenges - however, this will likely fail
+# with 'Permission denied' or a similar error.
+#
+# There are a couple of potential solutions to this:
+#
+#  * If you already have an Apache, Nginx, or similar listening on port 80,
+#    you can configure Synapse to use an alternate port, and have your web
+#    server forward the requests. For example, assuming you set 'port: 8009'
+#    below, on Apache, you would write:
+#
+#    ProxyPass /.well-known/acme-challenge http://localhost:8009/.well-known/acme-challenge
+#
+#  * Alternatively, you can use something like `authbind` to give Synapse
+#    permission to listen on port 80.
+#
+acme:
+    # ACME support is disabled by default. Set this to `true` and uncomment
+    # tls_certificate_path and tls_private_key_path above to enable it.
+    #
+    enabled: false
+
+    # Endpoint to use to request certificates. If you only want to test,
+    # use Let's Encrypt's staging url:
+    #     https://acme-staging.api.letsencrypt.org/directory
+    #
+    #url: https://acme-v01.api.letsencrypt.org/directory
+
+    # Port number to listen on for the HTTP-01 challenge. Change this if
+    # you are forwarding connections through Apache/Nginx/etc.
+    #
+    port: 80
+
+    # Local addresses to listen on for incoming connections.
+    # Again, you may want to change this if you are forwarding connections
+    # through Apache/Nginx/etc.
+    #
+    bind_addresses: ['::', '0.0.0.0']
+
+    # How many days remaining on a certificate before it is renewed.
+    #
+    reprovision_threshold: 30
+
+    # The domain that the certificate should be for. Normally this
+    # should be the same as your Matrix domain (i.e., 'server_name'), but,
+    # by putting a file at 'https://<server_name>/.well-known/matrix/server',
+    # you can delegate incoming traffic to another server. If you do that,
+    # you should give the target of the delegation here.
+    #
+    # For example: if your 'server_name' is 'example.com', but
+    # 'https://example.com/.well-known/matrix/server' delegates to
+    # 'matrix.example.com', you should put 'matrix.example.com' here.
+    #
+    # If not set, defaults to your 'server_name'.
+    #
+    domain: matrix.example.com
+
+    # file to use for the account key. This will be generated if it doesn't
+    # exist.
+    #
+    # If unspecified, we will use CONFDIR/client.key.
+    #
+    account_key_file: DATADIR/acme_account.key
+
+
+## Federation ##
+
+# Restrict federation to the following whitelist of domains.
+# N.B. we recommend also firewalling your federation listener to limit
+# inbound federation traffic as early as possible, rather than relying
+# purely on this application-layer restriction.  If not specified, the
+# default is to whitelist everything.
+#
+#federation_domain_whitelist:
+#  - lon.example.com
+#  - nyc.example.com
+#  - syd.example.com
+
+# Report prometheus metrics on the age of PDUs being sent to and received from
+# the following domains. This can be used to give an idea of "delay" on inbound
+# and outbound federation, though be aware that any delay can be due to problems
+# at either end or with the intermediate network.
+#
+# By default, no domains are monitored in this way.
+#
+#federation_metrics_domains:
+#  - matrix.org
+#  - example.com
+
+# Uncomment to disable profile lookup over federation. By default, the
+# Federation API allows other homeservers to obtain profile data of any user
+# on this homeserver. Defaults to 'true'.
+#
+#allow_profile_lookup_over_federation: false
+
+# Uncomment to disable device display name lookup over federation. By default, the
+# Federation API allows other homeservers to obtain device display names of any user
+# on this homeserver. Defaults to 'true'.
+#
+#allow_device_name_lookup_over_federation: false
+
+
+## Caching ##
+
+# Caching can be configured through the following options.
+#
+# A cache 'factor' is a multiplier that can be applied to each of
+# Synapse's caches in order to increase or decrease the maximum
+# number of entries that can be stored.
+
+# The number of events to cache in memory. Not affected by
+# caches.global_factor.
+#
+#event_cache_size: 10K
+
+caches:
+   # Controls the global cache factor, which is the default cache factor
+   # for all caches if a specific factor for that cache is not otherwise
+   # set.
+   #
+   # This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
+   # variable. Setting by environment variable takes priority over
+   # setting through the config file.
+   #
+   # Defaults to 0.5, which will halve the size of all caches.
+   #
+   #global_factor: 1.0
+
+   # A dictionary of cache name to cache factor for that individual
+   # cache. Overrides the global cache factor for a given cache.
+   #
+   # These can also be set through environment variables comprised
+   # of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
+   # letters and underscores. Setting by environment variable
+   # takes priority over setting through the config file.
+   # Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
+   #
+   # Some caches have '*' and other characters that are not
+   # alphanumeric or underscores. These caches can be named with or
+   # without the special characters stripped. For example, to specify
+   # the cache factor for `*stateGroupCache*` via an environment
+   # variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
+   #
+   per_cache_factors:
+     #get_users_who_share_room_with_user: 2.0
+
+
+## Database ##
+
+# The 'database' setting defines the database that synapse uses to store all of
+# its data.
+#
+# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
+# 'psycopg2' (for PostgreSQL).
+#
+# 'args' gives options which are passed through to the database engine,
+# except for options starting 'cp_', which are used to configure the Twisted
+# connection pool. For a reference to valid arguments, see:
+#   * for sqlite: https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
+#   * for postgres: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
+#   * for the connection pool: https://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__
+#
+#
+# Example SQLite configuration:
+#
+#database:
+#  name: sqlite3
+#  args:
+#    database: /path/to/homeserver.db
+#
+#
+# Example Postgres configuration:
+#
+#database:
+#  name: psycopg2
+#  args:
+#    user: synapse_user
+#    password: secretpassword
+#    database: synapse
+#    host: localhost
+#    port: 5432
+#    cp_min: 5
+#    cp_max: 10
+#
+# For more information on using Synapse with Postgres, see `docs/postgres.md`.
+#
+database:
+  name: sqlite3
+  args:
+    database: DATADIR/homeserver.db
+
+
+## Logging ##
+
+# A yaml python logging config file as described by
+# https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
+#
+log_config: "CONFDIR/SERVERNAME.log.config"
+
+
+## Ratelimiting ##
+
+# Ratelimiting settings for client actions (registration, login, messaging).
+#
+# Each ratelimiting configuration is made of two parameters:
+#   - per_second: number of requests a client can send per second.
+#   - burst_count: number of requests a client can send before being throttled.
+#
+# Synapse currently uses the following configurations:
+#   - one for messages that ratelimits sending based on the account the client
+#     is using
+#   - one for registration that ratelimits registration requests based on the
+#     client's IP address.
+#   - one for login that ratelimits login requests based on the client's IP
+#     address.
+#   - one for login that ratelimits login requests based on the account the
+#     client is attempting to log into.
+#   - one for login that ratelimits login requests based on the account the
+#     client is attempting to log into, based on the amount of failed login
+#     attempts for this account.
+#   - one for ratelimiting redactions by room admins. If this is not explicitly
+#     set then it uses the same ratelimiting as per rc_message. This is useful
+#     to allow room admins to deal with abuse quickly.
+#   - two for ratelimiting number of rooms a user can join, "local" for when
+#     users are joining rooms the server is already in (this is cheap) vs
+#     "remote" for when users are trying to join rooms not on the server (which
+#     can be more expensive)
+#   - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
+#   - two for ratelimiting how often invites can be sent in a room or to a
+#     specific user.
+#
+# The defaults are as shown below.
+#
+#rc_message:
+#  per_second: 0.2
+#  burst_count: 10
+#
+#rc_registration:
+#  per_second: 0.17
+#  burst_count: 3
+#
+#rc_login:
+#  address:
+#    per_second: 0.17
+#    burst_count: 3
+#  account:
+#    per_second: 0.17
+#    burst_count: 3
+#  failed_attempts:
+#    per_second: 0.17
+#    burst_count: 3
+#
+#rc_admin_redaction:
+#  per_second: 1
+#  burst_count: 50
+#
+#rc_joins:
+#  local:
+#    per_second: 0.1
+#    burst_count: 10
+#  remote:
+#    per_second: 0.01
+#    burst_count: 10
+#
+#rc_3pid_validation:
+#  per_second: 0.003
+#  burst_count: 5
+#
+#rc_invites:
+#  per_room:
+#    per_second: 0.3
+#    burst_count: 10
+#  per_user:
+#    per_second: 0.003
+#    burst_count: 5
+
+# Ratelimiting settings for incoming federation
+#
+# The rc_federation configuration is made up of the following settings:
+#   - window_size: window size in milliseconds
+#   - sleep_limit: number of federation requests from a single server in
+#     a window before the server will delay processing the request.
+#   - sleep_delay: duration in milliseconds to delay processing events
+#     from remote servers by if they go over the sleep limit.
+#   - reject_limit: maximum number of concurrent federation requests
+#     allowed from a single server
+#   - concurrent: number of federation requests to concurrently process
+#     from a single server
+#
+# The defaults are as shown below.
+#
+#rc_federation:
+#  window_size: 1000
+#  sleep_limit: 10
+#  sleep_delay: 500
+#  reject_limit: 50
+#  concurrent: 3
+
+# Target outgoing federation transaction frequency for sending read-receipts,
+# per-room.
+#
+# If we end up trying to send out more read-receipts, they will get buffered up
+# into fewer transactions.
+#
+#federation_rr_transactions_per_room_per_second: 50
+
+
+
+## Media Store ##
+
+# Enable the media store service in the Synapse master. Uncomment the
+# following if you are using a separate media store worker.
+#
+#enable_media_repo: false
+
+# Directory where uploaded images and attachments are stored.
+#
+media_store_path: "DATADIR/media_store"
+
+# Media storage providers allow media to be stored in different
+# locations.
+#
+#media_storage_providers:
+#  - module: file_system
+#    # Whether to store newly uploaded local files
+#    store_local: false
+#    # Whether to store newly downloaded remote files
+#    store_remote: false
+#    # Whether to wait for successful storage for local uploads
+#    store_synchronous: false
+#    config:
+#       directory: /mnt/some/other/directory
+
+# The largest allowed upload size in bytes
+#
+#max_upload_size: 50M
+
+# Maximum number of pixels that will be thumbnailed
+#
+#max_image_pixels: 32M
+
+# Whether to generate new thumbnails on the fly to precisely match
+# the resolution requested by the client. If true then whenever
+# a new resolution is requested by the client the server will
+# generate a new thumbnail. If false the server will pick a thumbnail
+# from a precalculated list.
+#
+#dynamic_thumbnails: false
+
+# List of thumbnails to precalculate when an image is uploaded.
+#
+#thumbnail_sizes:
+#  - width: 32
+#    height: 32
+#    method: crop
+#  - width: 96
+#    height: 96
+#    method: crop
+#  - width: 320
+#    height: 240
+#    method: scale
+#  - width: 640
+#    height: 480
+#    method: scale
+#  - width: 800
+#    height: 600
+#    method: scale
+
+# Is the preview URL API enabled?
+#
+# 'false' by default: uncomment the following to enable it (and specify a
+# url_preview_ip_range_blacklist).
+#
+#url_preview_enabled: true
+
+# List of IP address CIDR ranges that the URL preview spider is denied
+# from accessing.  There are no defaults: you must explicitly
+# specify a list for URL previewing to work.  You should specify any
+# internal services in your network that you do not want synapse to try
+# to connect to, otherwise anyone in any Matrix room could cause your
+# synapse to issue arbitrary GET requests to your internal services,
+# causing serious security issues.
+#
+# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
+# listed here, since they correspond to unroutable addresses.)
+#
+# This must be specified if url_preview_enabled is set. It is recommended that
+# you uncomment the following list as a starting point.
+#
+#url_preview_ip_range_blacklist:
+#  - '127.0.0.0/8'
+#  - '10.0.0.0/8'
+#  - '172.16.0.0/12'
+#  - '192.168.0.0/16'
+#  - '100.64.0.0/10'
+#  - '192.0.0.0/24'
+#  - '169.254.0.0/16'
+#  - '192.88.99.0/24'
+#  - '198.18.0.0/15'
+#  - '192.0.2.0/24'
+#  - '198.51.100.0/24'
+#  - '203.0.113.0/24'
+#  - '224.0.0.0/4'
+#  - '::1/128'
+#  - 'fe80::/10'
+#  - 'fc00::/7'
+#  - '2001:db8::/32'
+#  - 'ff00::/8'
+#  - 'fec0::/10'
+
+# List of IP address CIDR ranges that the URL preview spider is allowed
+# to access even if they are specified in url_preview_ip_range_blacklist.
+# This is useful for specifying exceptions to wide-ranging blacklisted
+# target IP ranges - e.g. for enabling URL previews for a specific private
+# website only visible in your network.
+#
+#url_preview_ip_range_whitelist:
+#   - '192.168.1.1'
+
+# Optional list of URL matches that the URL preview spider is
+# denied from accessing.  You should use url_preview_ip_range_blacklist
+# in preference to this, otherwise someone could define a public DNS
+# entry that points to a private IP address and circumvent the blacklist.
+# This is more useful if there is an entire shape of URL that you know
+# you will never want synapse to try to spider.
+#
+# Each list entry is a dictionary of url component attributes as returned
+# by urlparse.urlsplit as applied to the absolute form of the URL.  See
+# https://docs.python.org/2/library/urlparse.html#urlparse.urlsplit
+# The values of the dictionary are treated as a filename match pattern
+# applied to that component of URLs, unless they start with a ^ in which
+# case they are treated as a regular expression match.  If all the
+# specified component matches for a given list item succeed, the URL is
+# blacklisted.
+#
+#url_preview_url_blacklist:
+#  # blacklist any URL with a username in its URI
+#  - username: '*'
+#
+#  # blacklist all *.google.com URLs
+#  - netloc: 'google.com'
+#  - netloc: '*.google.com'
+#
+#  # blacklist all plain HTTP URLs
+#  - scheme: 'http'
+#
+#  # blacklist http(s)://www.acme.com/foo
+#  - netloc: 'www.acme.com'
+#    path: '/foo'
+#
+#  # blacklist any URL with a literal IPv4 address
+#  - netloc: '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
+
+# The largest allowed URL preview spidering size in bytes
+#
+#max_spider_size: 10M
+
+# A list of values for the Accept-Language HTTP header used when
+# downloading webpages during URL preview generation. This allows
+# Synapse to specify the preferred languages that URL previews should
+# be in when communicating with remote servers.
+#
+# Each value is an IETF language tag; a 2-3 letter identifier for a
+# language, optionally followed by subtags separated by '-', specifying
+# a country or region variant.
+#
+# Multiple values can be provided, and a weight can be added to each by
+# using quality value syntax (;q=). '*' translates to any language.
+#
+# Defaults to "en".
+#
+# Example:
+#
+# url_preview_accept_language:
+#   - en-UK
+#   - en-US;q=0.9
+#   - fr;q=0.8
+#   - *;q=0.7
+#
+#url_preview_accept_language:
+#   - en
+
+
+## Captcha ##
+# See docs/CAPTCHA_SETUP.md for full details of configuring this.
+
+# This homeserver's ReCAPTCHA public key. Must be specified if
+# enable_registration_captcha is enabled.
+#
+#recaptcha_public_key: "YOUR_PUBLIC_KEY"
+
+# This homeserver's ReCAPTCHA private key. Must be specified if
+# enable_registration_captcha is enabled.
+#
+#recaptcha_private_key: "YOUR_PRIVATE_KEY"
+
+# Uncomment to enable ReCaptcha checks when registering, preventing signup
+# unless a captcha is answered. Requires a valid ReCaptcha
+# public/private key. Defaults to 'false'.
+#
+#enable_registration_captcha: true
+
+# The API endpoint to use for verifying m.login.recaptcha responses.
+# Defaults to "https://www.recaptcha.net/recaptcha/api/siteverify".
+#
+#recaptcha_siteverify_api: "https://my.recaptcha.site"
+
+
+## TURN ##
+
+# The public URIs of the TURN server to give to clients
+#
+#turn_uris: []
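+#
+# For example (the hostname and port below are illustrative, not
+# defaults; substitute those of your own TURN server):
+#
+#turn_uris:
+#  - "turn:turn.example.com:3478?transport=udp"
+#  - "turn:turn.example.com:3478?transport=tcp"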
+
+# The shared secret used to compute passwords for the TURN server
+#
+#turn_shared_secret: "YOUR_SHARED_SECRET"
+
+# The username and password if the TURN server needs them and
+# does not use a token
+#
+#turn_username: "TURNSERVER_USERNAME"
+#turn_password: "TURNSERVER_PASSWORD"
+
+# How long generated TURN credentials last
+#
+#turn_user_lifetime: 1h
+
+# Whether guests should be allowed to use the TURN server.
+# This defaults to True, because otherwise VoIP will be unreliable for
+# guests. However, it does introduce a slight security risk, as it allows
+# users to connect to arbitrary endpoints without having first signed up
+# for a valid account (e.g. by passing a CAPTCHA).
+#
+#turn_allow_guests: true
+
+
+## Registration ##
+#
+# Registration can be rate-limited using the parameters in the "Ratelimiting"
+# section of this file.
+
+# Enable registration for new users.
+#
+#enable_registration: false
+
+# Time that a user's session remains valid for, after they log in.
+#
+# Note that this is not currently compatible with guest logins.
+#
+# Note also that this is calculated at login time: changes are not applied
+# retrospectively to users who have already logged in.
+#
+# By default, this is infinite.
+#
+#session_lifetime: 24h
+
+# The user must provide all of the below types of 3PID when registering.
+#
+#registrations_require_3pid:
+#  - email
+#  - msisdn
+
+# Explicitly disable asking for MSISDNs from the registration
+# flow (overrides registrations_require_3pid if MSISDNs are set as required)
+#
+#disable_msisdn_registration: true
+
+# Mandate that users are only allowed to associate certain formats of
+# 3PIDs with accounts on this server.
+#
+#allowed_local_3pids:
+#  - medium: email
+#    pattern: '^[^@]+@matrix\.org$'
+#  - medium: email
+#    pattern: '^[^@]+@vector\.im$'
+#  - medium: msisdn
+#    pattern: '\+44'
+
+# Enable 3PIDs lookup requests to identity servers from this server.
+#
+#enable_3pid_lookup: true
+
+# If set, allows registration of standard or admin accounts by anyone who
+# has the shared secret, even if registration is otherwise disabled.
+#
+#registration_shared_secret: <PRIVATE STRING>
+
+# Set the number of bcrypt rounds used to generate password hash.
+# Larger numbers increase the work factor needed to generate the hash.
+# The default number is 12 (which equates to 2^12 rounds).
+# N.B. that increasing this will exponentially increase the time required
+# to register or login - e.g. 24 => 2^24 rounds which will take >20 mins.
+#
+#bcrypt_rounds: 12
+
+# Allows users to register as guests without a password/email/etc, and
+# participate in rooms hosted on this server which have been made
+# accessible to anonymous users.
+#
+#allow_guest_access: false
+
+# The identity server which we suggest that clients should use when users log
+# in on this server.
+#
+# (By default, no suggestion is made, so it is left up to the client.
+# This setting is ignored unless public_baseurl is also set.)
+#
+#default_identity_server: https://matrix.org
+
+# Handle threepid (email/phone etc) registration and password resets through a set of
+# *trusted* identity servers. Note that this allows the configured identity server to
+# reset passwords for accounts!
+#
+# Be aware that if `email` is not set, and SMTP options have not been
+# configured in the email config block, registration and user password resets via
+# email will be globally disabled.
+#
+# Additionally, if `msisdn` is not set, registration and password resets via msisdn
+# will be disabled regardless, and users will not be able to associate an msisdn
+# identifier to their account. This is due to Synapse currently not supporting
+# any method of sending SMS messages on its own.
+#
+# To enable using an identity server for operations regarding a particular third-party
+# identifier type, set the value to the URL of that identity server as shown in the
+# examples below.
+#
+# Servers handling these requests must answer the `/requestToken` endpoints defined
+# by the Matrix Identity Service API specification:
+# https://matrix.org/docs/spec/identity_service/latest
+#
+# If a delegate is specified, the config option public_baseurl must also be filled out.
+#
+account_threepid_delegates:
+    #email: https://example.com     # Delegate email sending to example.com
+    #msisdn: http://localhost:8090  # Delegate SMS sending to this local process
+
+# Whether users are allowed to change their displayname after it has
+# been initially set. Useful when provisioning users based on the
+# contents of a third-party directory.
+#
+# Does not apply to server administrators. Defaults to 'true'
+#
+#enable_set_displayname: false
+
+# Whether users are allowed to change their avatar after it has been
+# initially set. Useful when provisioning users based on the contents
+# of a third-party directory.
+#
+# Does not apply to server administrators. Defaults to 'true'
+#
+#enable_set_avatar_url: false
+
+# Whether users can change the 3PIDs associated with their accounts
+# (email address and msisdn).
+#
+# Defaults to 'true'
+#
+#enable_3pid_changes: false
+
+# Users who register on this homeserver will automatically be joined
+# to these rooms.
+#
+# By default, any room aliases included in this list will be created
+# as a publicly joinable room when the first user registers for the
+# homeserver. This behaviour can be customised with the settings below.
+# If the room already exists, make certain it is a publicly joinable
+# room. The join rule of the room must be set to 'public'.
+#
+#auto_join_rooms:
+#  - "#example:example.com"
+
+# Where auto_join_rooms are specified, setting this flag ensures that
+# the rooms exist by creating them when the first user on the
+# homeserver registers.
+#
+# By default the auto-created rooms are publicly joinable from any federated
+# server. Use the autocreate_auto_join_rooms_federated and
+# autocreate_auto_join_room_preset settings below to customise this behaviour.
+#
+# Setting to false means that if the rooms are not manually created,
+# users cannot be auto-joined since they do not exist.
+#
+# Defaults to true. Uncomment the following line to disable automatically
+# creating auto-join rooms.
+#
+#autocreate_auto_join_rooms: false
+
+# Whether the auto_join_rooms that are auto-created are available via
+# federation. Only has an effect if autocreate_auto_join_rooms is true.
+#
+# Note that whether a room is federated cannot be modified after
+# creation.
+#
+# Defaults to true: the room will be joinable from other servers.
+# Uncomment the following to prevent users from other homeservers from
+# joining these rooms.
+#
+#autocreate_auto_join_rooms_federated: false
+
+# The room preset to use when auto-creating one of auto_join_rooms. Only has an
+# effect if autocreate_auto_join_rooms is true.
+#
+# This can be one of "public_chat", "private_chat", or "trusted_private_chat".
+# If a value of "private_chat" or "trusted_private_chat" is used then
+# auto_join_mxid_localpart must also be configured.
+#
+# Defaults to "public_chat", meaning that the room is joinable by anyone, including
+# federated servers if autocreate_auto_join_rooms_federated is true (the default).
+# Uncomment the following to require an invitation to join these rooms.
+#
+#autocreate_auto_join_room_preset: private_chat
+
+# The local part of the user id which is used to create auto_join_rooms if
+# autocreate_auto_join_rooms is true. If this is not provided then the
+# initial user account that registers will be used to create the rooms.
+#
+# The user id is also used to invite new users to any auto-join rooms which
+# are set to invite-only.
+#
+# It *must* be configured if autocreate_auto_join_room_preset is set to
+# "private_chat" or "trusted_private_chat".
+#
+# Note that this must be specified in order for new users to be correctly
+# invited to any auto-join rooms which have been set to invite-only (either
+# at the time of creation or subsequently).
+#
+# Note that, if the room already exists, this user must be joined and
+# have the appropriate permissions to invite new members.
+#
+#auto_join_mxid_localpart: system
+
+# When auto_join_rooms is specified, setting this flag to false prevents
+# guest accounts from being automatically joined to the rooms.
+#
+# Defaults to true.
+#
+#auto_join_rooms_for_guests: false
+
+
+## Account Validity ##
+
+# Optional account validity configuration. This allows for accounts to be denied
+# any request after a given period.
+#
+# Once this feature is enabled, Synapse will look for registered users without an
+# expiration date at startup, and will add one to every account it finds, using
+# the current settings at that time.
+# This means that, if a validity period is set, and Synapse is restarted (it will
+# then derive an expiration date from the current validity period), and some time
+# after that the validity period changes and Synapse is restarted, the users'
+# expiration dates won't be updated unless their account is manually renewed. This
+# date will be randomly selected within a range [now + period - d ; now + period],
+# where d is equal to 10% of the validity period.
+#
+account_validity:
+  # The account validity feature is disabled by default. Uncomment the
+  # following line to enable it.
+  #
+  #enabled: true
+
+  # The period after which an account is valid after its registration. When
+  # renewing the account, its validity period will be extended by this amount
+  # of time. This parameter is required when using the account validity
+  # feature.
+  #
+  #period: 6w
+
+  # The amount of time before an account's expiry date at which Synapse will
+  # send an email to the account's email address with a renewal link. By
+  # default, no such emails are sent.
+  #
+  # If you enable this setting, you will also need to fill out the 'email' and
+  # 'public_baseurl' configuration sections.
+  #
+  #renew_at: 1w
+
+  # The subject of the email sent out with the renewal link. '%(app)s' can be
+  # used as a placeholder for the 'app_name' parameter from the 'email'
+  # section.
+  #
+  # Note that the placeholder must be written '%(app)s', including the
+  # trailing 's'.
+  #
+  # If this is not set, a default value is used.
+  #
+  #renew_email_subject: "Renew your %(app)s account"
+
+  # Directory in which Synapse will try to find templates for the HTML files to
+  # serve to the user when trying to renew an account. If not set, default
+  # templates from within the Synapse package will be used.
+  #
+  # The currently available templates are:
+  #
+  # * account_renewed.html: Displayed to the user after they have successfully
+  #       renewed their account.
+  #
+  # * account_previously_renewed.html: Displayed to the user if they attempt to
+  #       renew their account with a token that is valid, but that has already
+  #       been used. In this case the account is not renewed again.
+  #
+  # * invalid_token.html: Displayed to the user when they try to renew an account
+  #       with an unknown or invalid renewal token.
+  #
+  # See https://github.com/matrix-org/synapse/tree/master/synapse/res/templates for
+  # default template contents.
+  #
+  # The file name of some of these templates can be configured below for legacy
+  # reasons.
+  #
+  #template_dir: "res/templates"
+
+  # A custom file name for the 'account_renewed.html' template.
+  #
+  # If not set, the file is assumed to be named "account_renewed.html".
+  #
+  #account_renewed_html_path: "account_renewed.html"
+
+  # A custom file name for the 'invalid_token.html' template.
+  #
+  # If not set, the file is assumed to be named "invalid_token.html".
+  #
+  #invalid_token_html_path: "invalid_token.html"
+
+
+## Metrics ##
+
+# Enable collection and rendering of performance metrics
+#
+#enable_metrics: false
+
+# Enable sentry integration
+# NOTE: While attempts are made to ensure that the logs don't contain
+# any sensitive information, this cannot be guaranteed. By enabling
+# this option the sentry server may therefore receive sensitive
+# information, and it in turn may then disseminate sensitive information
+# through insecure notification channels if so configured.
+#
+#sentry:
+#    dsn: "..."
+
+# Flags to enable Prometheus metrics which are not suitable to be
+# enabled by default, either for performance reasons or limited use.
+#
+metrics_flags:
+    # Publish synapse_federation_known_servers, a gauge of the number of
+    # servers this homeserver knows about, including itself. May cause
+    # performance problems on large homeservers.
+    #
+    #known_servers: true
+
+# Whether or not to report anonymized homeserver usage statistics.
+#
+#report_stats: true|false
+
+# The endpoint to report the anonymized homeserver usage statistics to.
+# Defaults to https://matrix.org/report-usage-stats/push
+#
+#report_stats_endpoint: https://example.com/report-usage-stats/push
+
+
+## API Configuration ##
+
+# Controls for the state that is shared with users who receive an invite
+# to a room
+#
+room_prejoin_state:
+   # By default, the following state event types are shared with users who
+   # receive invites to the room:
+   #
+   # - m.room.join_rules
+   # - m.room.canonical_alias
+   # - m.room.avatar
+   # - m.room.encryption
+   # - m.room.name
+   # - m.room.create
+   #
+   # Uncomment the following to disable these defaults (so that only the event
+   # types listed in 'additional_event_types' are shared). Defaults to 'false'.
+   #
+   #disable_default_event_types: true
+
+   # Additional state event types to share with users when they are invited
+   # to a room.
+   #
+   # By default, this list is empty (so only the default event types are shared).
+   #
+   #additional_event_types:
+   #  - org.example.custom.event.type
+
+
+# A list of application service config files to use
+#
+#app_service_config_files:
+#  - app_service_1.yaml
+#  - app_service_2.yaml
+
+# Uncomment to enable tracking of application service IP addresses. Implicitly
+# enables MAU tracking for application service users.
+#
+#track_appservice_user_ips: true
+
+
+# a secret which is used to sign access tokens. If none is specified,
+# the registration_shared_secret is used, if one is given; otherwise,
+# a secret key is derived from the signing key.
+#
+#macaroon_secret_key: <PRIVATE STRING>
+
+# a secret which is used to calculate HMACs for form values, to stop
+# falsification of values. Must be specified for the User Consent
+# forms to work.
+#
+#form_secret: <PRIVATE STRING>
+
+## Signing Keys ##
+
+# Path to the signing key to sign messages with
+#
+signing_key_path: "CONFDIR/SERVERNAME.signing.key"
+
+# The keys that the server used to sign messages with but won't use
+# to sign new messages.
+#
+old_signing_keys:
+  # For each key, `key` should be the base64-encoded public key, and
+  # `expired_ts` should be the time (in milliseconds since the unix epoch) that
+  # it was last used.
+  #
+  # It is possible to build an entry from an old signing.key file using the
+  # `export_signing_key` script which is provided with synapse.
+  #
+  # For example:
+  #
+  #"ed25519:id": { key: "base64string", expired_ts: 123456789123 }
+
+# How long a key response published by this server is valid for.
+# Used to set the valid_until_ts in /key/v2 APIs.
+# Determines how quickly servers will query to check which keys
+# are still valid.
+#
+#key_refresh_interval: 1d
+
+# The trusted servers to download signing keys from.
+#
+# When we need to fetch a signing key, each server is tried in parallel.
+#
+# Normally, the connection to the key server is validated via TLS certificates.
+# Additional security can be provided by configuring a `verify key`, which
+# will make synapse check that the response is signed by that key.
+#
+# This setting supersedes an older setting named `perspectives`. The old format
+# is still supported for backwards-compatibility, but it is deprecated.
+#
+# 'trusted_key_servers' defaults to matrix.org, but using it will generate a
+# warning on start-up. To suppress this warning, set
+# 'suppress_key_server_warning' to true.
+#
+# Options for each entry in the list include:
+#
+#    server_name: the name of the server. required.
+#
+#    verify_keys: an optional map from key id to base64-encoded public key.
+#       If specified, we will check that the response is signed by at least
+#       one of the given keys.
+#
+#    accept_keys_insecurely: a boolean. Normally, if `verify_keys` is unset,
+#       and federation_verify_certificates is not `true`, synapse will refuse
+#       to start, because this would allow anyone who can spoof DNS responses
+#       to masquerade as the trusted key server. If you know what you are doing
+#       and are sure that your network environment provides a secure connection
+#       to the key server, you can set this to `true` to override this
+#       behaviour.
+#
+# An example configuration might look like:
+#
+#trusted_key_servers:
+#  - server_name: "my_trusted_server.example.com"
+#    verify_keys:
+#      "ed25519:auto": "abcdefghijklmnopqrstuvwxyzabcdefghijklmopqr"
+#  - server_name: "my_other_trusted_server.example.com"
+#
+trusted_key_servers:
+  - server_name: "matrix.org"
+
+# Uncomment the following to disable the warning that is emitted when the
+# trusted_key_servers include 'matrix.org'. See above.
+#
+#suppress_key_server_warning: true
+
+# The signing keys to use when acting as a trusted key server. If not specified
+# defaults to the server signing key.
+#
+# Can contain multiple keys, one per line.
+#
+#key_server_signing_keys_path: "key_server_signing_keys.key"
+
+
+## Single sign-on integration ##
+
+# The following settings can be used to make Synapse use a single sign-on
+# provider for authentication, instead of its internal password database.
+#
+# You will probably also want to set the following options to `false` to
+# disable the regular login/registration flows:
+#   * enable_registration
+#   * password_config.enabled
+#
+# You will also want to investigate the settings under the "sso" configuration
+# section below.
+
+# Enable SAML2 for registration and login. Uses pysaml2.
+#
+# At least one of `sp_config` or `config_path` must be set in this section to
+# enable SAML login.
+#
+# Once SAML support is enabled, a metadata file will be exposed at
+# https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to
+# use to configure your SAML IdP with. Alternatively, you can manually configure
+# the IdP to use an ACS location of
+# https://<server>:<port>/_synapse/client/saml2/authn_response.
+#
+saml2_config:
+  # `sp_config` is the configuration for the pysaml2 Service Provider.
+  # See pysaml2 docs for format of config.
+  #
+  # Default values will be used for the 'entityid' and 'service' settings,
+  # so it is not normally necessary to specify them unless you need to
+  # override them.
+  #
+  sp_config:
+    # Point this to the IdP's metadata. You must provide either a local
+    # file via the `local` attribute or (preferably) a URL via the
+    # `remote` attribute.
+    #
+    #metadata:
+    #  local: ["saml2/idp.xml"]
+    #  remote:
+    #    - url: https://our_idp/metadata.xml
+
+    # Allowed clock difference in seconds between the homeserver and IdP.
+    #
+    # Uncomment the below to increase the accepted time difference from 0 to 3 seconds.
+    #
+    #accepted_time_diff: 3
+
+    # By default, the user has to go to our login page first. If you'd like
+    # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
+    # 'service.sp' section:
+    #
+    #service:
+    #  sp:
+    #    allow_unsolicited: true
+
+    # The examples below are just used to generate our metadata xml, and you
+    # may well not need them, depending on your setup. Alternatively you
+    # may need a whole lot more detail - see the pysaml2 docs!
+
+    #description: ["My awesome SP", "en"]
+    #name: ["Test SP", "en"]
+
+    #ui_info:
+    #  display_name:
+    #    - lang: en
+    #      text: "Display Name is the descriptive name of your service."
+    #  description:
+    #    - lang: en
+    #      text: "Description should be a short paragraph explaining the purpose of the service."
+    #  information_url:
+    #    - lang: en
+    #      text: "https://example.com/terms-of-service"
+    #  privacy_statement_url:
+    #    - lang: en
+    #      text: "https://example.com/privacy-policy"
+    #  keywords:
+    #    - lang: en
+    #      text: ["Matrix", "Element"]
+    #  logo:
+    #    - lang: en
+    #      text: "https://example.com/logo.svg"
+    #      width: "200"
+    #      height: "80"
+
+    #organization:
+    #  name: Example com
+    #  display_name:
+    #    - ["Example co", "en"]
+    #  url: "http://example.com"
+
+    #contact_person:
+    #  - given_name: Bob
+    #    sur_name: "the Sysadmin"
+    #    email_address: ["admin@example.com"]
+    #    contact_type: technical
+
+  # Instead of putting the config inline as above, you can specify a
+  # separate pysaml2 configuration file:
+  #
+  #config_path: "CONFDIR/sp_conf.py"
+
+  # The lifetime of a SAML session. This defines how long a user has to
+  # complete the authentication process, if allow_unsolicited is unset.
+  # The default is 15 minutes.
+  #
+  #saml_session_lifetime: 5m
+
+  # An external module can be provided here as a custom solution to
+  # mapping attributes returned from a saml provider onto a matrix user.
+  #
+  user_mapping_provider:
+    # The custom module's class. Uncomment to use a custom module.
+    #
+    #module: mapping_provider.SamlMappingProvider
+
+    # Custom configuration values for the module. Below options are
+    # intended for the built-in provider, they should be changed if
+    # using a custom module. This section will be passed as a Python
+    # dictionary to the module's `parse_config` method.
+    #
+    config:
+      # The SAML attribute (after mapping via the attribute maps) to use
+      # to derive the Matrix ID from. 'uid' by default.
+      #
+      # Note: This used to be configured by the
+      # saml2_config.mxid_source_attribute option. If that is still
+      # defined, its value will be used instead.
+      #
+      #mxid_source_attribute: displayName
+
+      # The mapping system to use for mapping the saml attribute onto a
+      # matrix ID.
+      #
+      # Options include:
+      #  * 'hexencode' (which maps unpermitted characters to '=xx')
+      #  * 'dotreplace' (which replaces unpermitted characters with
+      #     '.').
+      # The default is 'hexencode'.
+      #
+      # Note: This used to be configured by the
+      # saml2_config.mxid_mapping option. If that is still defined, its
+      # value will be used instead.
+      #
+      #mxid_mapping: dotreplace
+
+  # In previous versions of synapse, the mapping from SAML attribute to
+  # MXID was always calculated dynamically rather than stored in a
+  # table. For backwards-compatibility, we will look for user_ids
+  # matching such a pattern before creating a new account.
+  #
+  # This setting controls the SAML attribute which will be used for this
+  # backwards-compatibility lookup. Typically it should be 'uid', but if
+  # the attribute maps are changed, it may be necessary to change it.
+  #
+  # The default is 'uid'.
+  #
+  #grandfathered_mxid_source_attribute: upn
+
+  # It is possible to configure Synapse to only allow logins if SAML attributes
+  # match particular values. The requirements can be listed under
+  # `attribute_requirements` as shown below. All of the listed attributes must
+  # match for the login to be permitted.
+  #
+  #attribute_requirements:
+  #  - attribute: userGroup
+  #    value: "staff"
+  #  - attribute: department
+  #    value: "sales"
+
+  # If the metadata XML contains multiple IdP entities then the `idp_entityid`
+  # option must be set to the entity to redirect users to.
+  #
+  # Most deployments only have a single IdP entity and so should omit this
+  # option.
+  #
+  #idp_entityid: 'https://our_idp/entityid'
+
+
+# List of OpenID Connect (OIDC) / OAuth 2.0 identity providers, for registration
+# and login.
+#
+# Options for each entry include:
+#
+#   idp_id: a unique identifier for this identity provider. Used internally
+#       by Synapse; should be a single word such as 'github'.
+#
+#       Note that, if this is changed, users authenticating via that provider
+#       will no longer be recognised as the same user!
+#
+#       (Use "oidc" here if you are migrating from an old "oidc_config"
+#       configuration.)
+#
+#   idp_name: A user-facing name for this identity provider, which is used to
+#       offer the user a choice of login mechanisms.
+#
+#   idp_icon: An optional icon for this identity provider, which is presented
+#       by clients and Synapse's own IdP picker page. If given, must be an
+#       MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to
+#       obtain such an MXC URI is to upload an image to an (unencrypted) room
+#       and then copy the "url" from the source of the event.)
+#
+#   idp_brand: An optional brand for this identity provider, allowing clients
+#       to style the login flow according to the identity provider in question.
+#       See the spec for possible options here.
+#
+#   discover: set to 'false' to disable the use of the OIDC discovery mechanism
+#       to discover endpoints. Defaults to true.
+#
+#   issuer: Required. The OIDC issuer. Used to validate tokens and (if discovery
+#       is enabled) to discover the provider's endpoints.
+#
+#   client_id: Required. oauth2 client id to use.
+#
+#   client_secret: oauth2 client secret to use. May be omitted if
+#        client_secret_jwt_key is given, or if client_auth_method is 'none'.
+#
+#   client_secret_jwt_key: Alternative to client_secret: details of a key used
+#      to create a JSON Web Token to be used as an OAuth2 client secret. If
+#      given, must be a dictionary with the following properties:
+#
+#          key: a pem-encoded signing key. Must be a suitable key for the
+#              algorithm specified. Required unless 'key_file' is given.
+#
+#          key_file: the path to file containing a pem-encoded signing key file.
+#              Required unless 'key' is given.
+#
+#          jwt_header: a dictionary giving properties to include in the JWT
+#              header. Must include the key 'alg', giving the algorithm used to
+#              sign the JWT, such as "ES256", using the JWA identifiers in
+#              RFC7518.
+#
+#          jwt_payload: an optional dictionary giving properties to include in
+#              the JWT payload. Normally this should include an 'iss' key.
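+#
+#       As a hypothetical sketch (the key path, algorithm and issuer
+#       below are illustrative values, not defaults):
+#
+#       client_secret_jwt_key:
+#         key_file: "/path/to/jwt_signing_key.pem"
+#         jwt_header:
+#           alg: "ES256"
+#         jwt_payload:
+#           iss: "your-oauth-client-id"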
+#
+#   client_auth_method: auth method to use when exchanging the token. Valid
+#       values are 'client_secret_basic' (default), 'client_secret_post' and
+#       'none'.
+#
+#   scopes: list of scopes to request. This should normally include the "openid"
+#       scope. Defaults to ["openid"].
+#
+#   authorization_endpoint: the oauth2 authorization endpoint. Required if
+#       provider discovery is disabled.
+#
+#   token_endpoint: the oauth2 token endpoint. Required if provider discovery is
+#       disabled.
+#
+#   userinfo_endpoint: the OIDC userinfo endpoint. Required if discovery is
+#       disabled and the 'openid' scope is not requested.
+#
+#   jwks_uri: URI where to fetch the JWKS. Required if discovery is disabled and
+#       the 'openid' scope is used.
+#
+#   skip_verification: set to 'true' to skip metadata verification. Use this if
+#       you are connecting to a provider that is not OpenID Connect compliant.
+#       Defaults to false. Avoid this in production.
+#
+#   user_profile_method: Whether to fetch the user profile from the userinfo
+#       endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
+#
+#       Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
+#       included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
+#       userinfo endpoint.
+#
+#   allow_existing_users: set to 'true' to allow a user logging in via OIDC to
+#       match a pre-existing account instead of failing. This could be used if
+#       switching from password logins to OIDC. Defaults to false.
+#
+#   user_mapping_provider: Configuration for how attributes returned from an OIDC
+#       provider are mapped onto a matrix user. This setting has the following
+#       sub-properties:
+#
+#       module: The class name of a custom mapping module. Default is
+#           'synapse.handlers.oidc.JinjaOidcMappingProvider'.
+#           See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
+#           for information on implementing a custom mapping provider.
+#
+#       config: Configuration for the mapping provider module. This section will
+#           be passed as a Python dictionary to the user mapping provider
+#           module's `parse_config` method.
+#
+#           For the default provider, the following settings are available:
+#
+#             subject_claim: name of the claim containing a unique identifier
+#                 for the user. Defaults to 'sub', which OpenID Connect
+#                 compliant providers should provide.
+#
+#             localpart_template: Jinja2 template for the localpart of the MXID.
+#                 If this is not set, the user will be prompted to choose their
+#                 own username (see 'sso_auth_account_details.html' in the 'sso'
+#                 section of this file).
+#
+#             display_name_template: Jinja2 template for the display name to set
+#                 on first login. If unset, no displayname will be set.
+#
+#             email_template: Jinja2 template for the email address of the user.
+#                 If unset, no email address will be added to the account.
+#
+#             extra_attributes: a map of Jinja2 templates for extra attributes
+#                 to send back to the client during login.
+#                 Note that these are non-standard and clients will ignore them
+#                 without modifications.
+#
+#           When rendering, the Jinja2 templates are given a 'user' variable,
+#           which is set to the claims returned by the UserInfo Endpoint and/or
+#           in the ID Token.
+#
+#   It is possible to configure Synapse to only allow logins if certain attributes
+#   match particular values in the OIDC userinfo. The requirements can be listed under
+#   `attribute_requirements` as shown below. All of the listed attributes must
+#   match for the login to be permitted. Additional attributes can be added to
+#   userinfo by expanding the `scopes` section of the OIDC config to retrieve
+#   additional information from the OIDC provider.
+#
+#   If the OIDC claim is a list, then the attribute must match any value in the list.
+#   Otherwise, it must exactly match the value of the claim. Using the example
+#   below, the `family_name` claim MUST be "Stephensson", but the `groups`
+#   claim MUST contain "admin".
+#
+#   attribute_requirements:
+#     - attribute: family_name
+#       value: "Stephensson"
+#     - attribute: groups
+#       value: "admin"
+#
+# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
+# for information on how to configure these options.
+#
+# For backwards compatibility, it is also possible to configure a single OIDC
+# provider via an 'oidc_config' setting. This is now deprecated and admins are
+# advised to migrate to the 'oidc_providers' format. (When doing that migration,
+# use 'oidc' for the idp_id to ensure that existing users continue to be
+# recognised.)
+#
+oidc_providers:
+  # Generic example
+  #
+  #- idp_id: my_idp
+  #  idp_name: "My OpenID provider"
+  #  idp_icon: "mxc://example.com/mediaid"
+  #  discover: false
+  #  issuer: "https://accounts.example.com/"
+  #  client_id: "provided-by-your-issuer"
+  #  client_secret: "provided-by-your-issuer"
+  #  client_auth_method: client_secret_post
+  #  scopes: ["openid", "profile"]
+  #  authorization_endpoint: "https://accounts.example.com/oauth2/auth"
+  #  token_endpoint: "https://accounts.example.com/oauth2/token"
+  #  userinfo_endpoint: "https://accounts.example.com/userinfo"
+  #  jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
+  #  skip_verification: true
+  #  user_mapping_provider:
+  #    config:
+  #      subject_claim: "id"
+  #      localpart_template: "{{ user.login }}"
+  #      display_name_template: "{{ user.name }}"
+  #      email_template: "{{ user.email }}"
+  #  attribute_requirements:
+  #    - attribute: userGroup
+  #      value: "synapseUsers"
+
+
+# Enable Central Authentication Service (CAS) for registration and login.
+#
+cas_config:
+  # Uncomment the following to enable authorization against a CAS server.
+  # Defaults to false.
+  #
+  #enabled: true
+
+  # The URL of the CAS authorization endpoint.
+  #
+  #server_url: "https://cas-server.com"
+
+  # The attribute of the CAS response to use as the display name.
+  #
+  # If unset, no displayname will be set.
+  #
+  #displayname_attribute: name
+
+  # It is possible to configure Synapse to only allow logins if CAS attributes
+  # match particular values. All of the keys in the mapping below must exist
+  # and the values must match the given value. Alternatively, if the given value
+  # is None then any value is allowed (the attribute just must exist).
+  # All of the listed attributes must match for the login to be permitted.
+  #
+  #required_attributes:
+  #  userGroup: "staff"
+  #  department: None
+
+
+# Additional settings to use with single-sign on systems such as OpenID Connect,
+# SAML2 and CAS.
+#
+sso:
+    # A list of client URLs which are whitelisted so that the user does not
+    # have to confirm giving access to their account to the URL. Any client
+    # whose URL starts with an entry in the following list will not be subject
+    # to an additional confirmation step after the SSO login is completed.
+    #
+    # WARNING: An entry such as "https://my.client" is insecure, because it
+    # will also match "https://my.client.evil.site", exposing your users to
+    # phishing attacks from evil.site. To avoid this, include a slash after the
+    # hostname: "https://my.client/".
+    #
+    # If public_baseurl is set, then the login fallback page (used by clients
+    # that don't natively support the required login flows) is whitelisted in
+    # addition to any URLs in this list.
+    #
+    # By default, this list is empty.
+    #
+    #client_whitelist:
+    #  - https://riot.im/develop
+    #  - https://my.custom.client/
+
+    # Directory in which Synapse will try to find the template files below.
+    # If not set, or the files named below are not found within the template
+    # directory, default templates from within the Synapse package will be used.
+    #
+    # Synapse will look for the following templates in this directory:
+    #
+    # * HTML page to prompt the user to choose an Identity Provider during
+    #   login: 'sso_login_idp_picker.html'.
+    #
+    #   This is only used if multiple SSO Identity Providers are configured.
+    #
+    #   When rendering, this template is given the following variables:
+    #     * redirect_url: the URL that the user will be redirected to after
+    #       login.
+    #
+    #     * server_name: the homeserver's name.
+    #
+    #     * providers: a list of available Identity Providers. Each element is
+    #       an object with the following attributes:
+    #
+    #         * idp_id: unique identifier for the IdP
+    #         * idp_name: user-facing name for the IdP
+    #         * idp_icon: if specified in the IdP config, an MXC URI for an icon
+    #              for the IdP
+    #         * idp_brand: if specified in the IdP config, a textual identifier
+    #              for the brand of the IdP
+    #
+    #   The rendered HTML page should contain a form which submits its results
+    #   back as a GET request, with the following query parameters:
+    #
+    #     * redirectUrl: the client redirect URI (ie, the `redirect_url` passed
+    #       to the template)
+    #
+    #     * idp: the 'idp_id' of the chosen IDP.
+    #
+    # * HTML page to prompt new users to enter a userid and confirm other
+    #   details: 'sso_auth_account_details.html'. This is only shown if the
+    #   SSO implementation (with any user_mapping_provider) does not return
+    #   a localpart.
+    #
+    #   When rendering, this template is given the following variables:
+    #
+    #     * server_name: the homeserver's name.
+    #
+    #     * idp: details of the SSO Identity Provider that the user logged in
+    #       with: an object with the following attributes:
+    #
+    #         * idp_id: unique identifier for the IdP
+    #         * idp_name: user-facing name for the IdP
+    #         * idp_icon: if specified in the IdP config, an MXC URI for an icon
+    #              for the IdP
+    #         * idp_brand: if specified in the IdP config, a textual identifier
+    #              for the brand of the IdP
+    #
+    #     * user_attributes: an object containing details about the user that
+    #       we received from the IdP. May have the following attributes:
+    #
+    #         * display_name: the user's display_name
+    #         * emails: a list of email addresses
+    #
+    #   The template should render a form which submits the following fields:
+    #
+    #     * username: the localpart of the user's chosen user id
+    #
+    # * HTML page allowing the user to consent to the server's terms and
+    #   conditions. This is only shown for new users, and only if
+    #   `user_consent.require_at_registration` is set.
+    #
+    #   When rendering, this template is given the following variables:
+    #
+    #     * server_name: the homeserver's name.
+    #
+    #     * user_id: the user's matrix proposed ID.
+    #
+    #     * user_profile.display_name: the user's proposed display name, if any.
+    #
+    #     * consent_version: the version of the terms that the user will be
+    #       shown
+    #
+    #     * terms_url: a link to the page showing the terms.
+    #
+    #   The template should render a form which submits the following fields:
+    #
+    #     * accepted_version: the version of the terms accepted by the user
+    #       (ie, 'consent_version' from the input variables).
+    #
+    # * HTML page for a confirmation step before redirecting back to the client
+    #   with the login token: 'sso_redirect_confirm.html'.
+    #
+    #   When rendering, this template is given the following variables:
+    #
+    #     * redirect_url: the URL the user is about to be redirected to.
+    #
+    #     * display_url: the same as `redirect_url`, but with the query
+    #                    parameters stripped. The intention is to have a
+    #                    human-readable URL to show to users, not to use it as
+    #                    the final address to redirect to.
+    #
+    #     * server_name: the homeserver's name.
+    #
+    #     * new_user: a boolean indicating whether this is the user's first time
+    #          logging in.
+    #
+    #     * user_id: the user's matrix ID.
+    #
+    #     * user_profile.avatar_url: an MXC URI for the user's avatar, if any.
+    #           None if the user has not set an avatar.
+    #
+    #     * user_profile.display_name: the user's display name. None if the user
+    #           has not set a display name.
+    #
+    # * HTML page which notifies the user that they are authenticating to confirm
+    #   an operation on their account during the user interactive authentication
+    #   process: 'sso_auth_confirm.html'.
+    #
+    #   When rendering, this template is given the following variables:
+    #     * redirect_url: the URL the user is about to be redirected to.
+    #
+    #     * description: the operation which the user is being asked to confirm
+    #
+    #     * idp: details of the Identity Provider that we will use to confirm
+    #       the user's identity: an object with the following attributes:
+    #
+    #         * idp_id: unique identifier for the IdP
+    #         * idp_name: user-facing name for the IdP
+    #         * idp_icon: if specified in the IdP config, an MXC URI for an icon
+    #              for the IdP
+    #         * idp_brand: if specified in the IdP config, a textual identifier
+    #              for the brand of the IdP
+    #
+    # * HTML page shown after a successful user interactive authentication session:
+    #   'sso_auth_success.html'.
+    #
+    #   Note that this page must include the JavaScript which notifies of a successful authentication
+    #   (see https://matrix.org/docs/spec/client_server/r0.6.0#fallback).
+    #
+    #   This template has no additional variables.
+    #
+    # * HTML page shown after a user-interactive authentication session which
+    #   does not map correctly onto the expected user: 'sso_auth_bad_user.html'.
+    #
+    #   When rendering, this template is given the following variables:
+    #     * server_name: the homeserver's name.
+    #     * user_id_to_verify: the MXID of the user that we are trying to
+    #       validate.
+    #
+    # * HTML page shown during single sign-on if a deactivated user (according to Synapse's database)
+    #   attempts to login: 'sso_account_deactivated.html'.
+    #
+    #   This template has no additional variables.
+    #
+    # * HTML page to display to users if something goes wrong during the
+    #   OpenID Connect authentication process: 'sso_error.html'.
+    #
+    #   When rendering, this template is given two variables:
+    #     * error: the technical name of the error
+    #     * error_description: a human-readable message for the error
+    #
+    # You can see the default templates at:
+    # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
+    #
+    #template_dir: "res/templates"
+
+
+# JSON web token integration. The following settings can be used to make
+# Synapse use JSON web tokens for authentication, instead of its internal
+# password database.
+#
+# Each JSON Web Token needs to contain a "sub" (subject) claim, which is
+# used as the localpart of the mxid.
+#
+# Additionally, the expiration time ("exp"), not before time ("nbf"),
+# and issued at ("iat") claims are validated if present.
+#
+# Note that this is a non-standard login type and client support is
+# expected to be non-existent.
+#
+# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
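+#
+# As an illustrative sketch only (see the docs linked above for the
+# authoritative flow), a client login request using this method might look
+# like the following, where the token is a JWT signed with the configured
+# secret and algorithm:
+#
+#   POST /_matrix/client/r0/login
+#   {"type": "org.matrix.login.jwt", "token": "<signed JWT>"}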
+#
+#jwt_config:
+    # Uncomment the following to enable authorization using JSON web
+    # tokens. Defaults to false.
+    #
+    #enabled: true
+
+    # This is either the private shared secret or the public key used to
+    # decode the contents of the JSON web token.
+    #
+    # Required if 'enabled' is true.
+    #
+    #secret: "provided-by-your-issuer"
+
+    # The algorithm used to sign the JSON web token.
+    #
+    # Supported algorithms are listed at
+    # https://pyjwt.readthedocs.io/en/latest/algorithms.html
+    #
+    # Required if 'enabled' is true.
+    #
+    #algorithm: "provided-by-your-issuer"
+
+    # The issuer to validate the "iss" claim against.
+    #
+    # Optional, if provided the "iss" claim will be required and
+    # validated for all JSON web tokens.
+    #
+    #issuer: "provided-by-your-issuer"
+
+    # A list of audiences to validate the "aud" claim against.
+    #
+    # Optional, if provided the "aud" claim will be required and
+    # validated for all JSON web tokens.
+    #
+    # Note that if the "aud" claim is included in a JSON web token then
+    # validation will fail without configuring audiences.
+    #
+    #audiences:
+    #    - "provided-by-your-issuer"
+
+
+password_config:
+   # Uncomment to disable password login
+   #
+   #enabled: false
+
+   # Uncomment to disable authentication against the local password
+   # database. This is ignored if `enabled` is false, and is only useful
+   # if you have other password_providers.
+   #
+   #localdb_enabled: false
+
+   # Uncomment and change to a secret random string for extra security.
+   # DO NOT CHANGE THIS AFTER INITIAL SETUP!
+   #
+   #pepper: "EVEN_MORE_SECRET"
+
+   # Define and enforce a password policy. Each parameter is optional.
+   # This is an implementation of MSC2000.
+   #
+   policy:
+      # Whether to enforce the password policy.
+      # Defaults to 'false'.
+      #
+      #enabled: true
+
+      # Minimum accepted length for a password.
+      # Defaults to 0.
+      #
+      #minimum_length: 15
+
+      # Whether a password must contain at least one digit.
+      # Defaults to 'false'.
+      #
+      #require_digit: true
+
+      # Whether a password must contain at least one symbol.
+      # A symbol is any character that's not a number or a letter.
+      # Defaults to 'false'.
+      #
+      #require_symbol: true
+
+      # Whether a password must contain at least one lowercase letter.
+      # Defaults to 'false'.
+      #
+      #require_lowercase: true
+
+      # Whether a password must contain at least one uppercase letter.
+      # Defaults to 'false'.
+      #
+      #require_uppercase: true
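+
+      # As an illustrative combination (uncommenting the settings above;
+      # the values here are placeholders), a policy requiring 12+ character
+      # passwords containing a digit and a symbol would be:
+      #
+      #   policy:
+      #      enabled: true
+      #      minimum_length: 12
+      #      require_digit: true
+      #      require_symbol: true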
+
+ui_auth:
+    # The amount of time to allow a user-interactive authentication session
+    # to be active.
+    #
+    # This defaults to 0, meaning the user is queried for their credentials
+    # before every action, but this can be overridden to allow a single
+    # validation to be re-used.  This weakens the protections afforded by
+    # the user-interactive authentication process, by allowing for multiple
+    # (and potentially different) operations to use the same validation session.
+    #
+    # Uncomment below to allow for credential validation to last for 15
+    # seconds.
+    #
+    #session_timeout: "15s"
+
+
+# Configuration for sending emails from Synapse.
+#
+email:
+  # The hostname of the outgoing SMTP server to use. Defaults to 'localhost'.
+  #
+  #smtp_host: mail.server
+
+  # The port on the mail server for outgoing SMTP. Defaults to 25.
+  #
+  #smtp_port: 587
+
+  # Username/password for authentication to the SMTP server. By default, no
+  # authentication is attempted.
+  #
+  #smtp_user: "exampleusername"
+  #smtp_pass: "examplepassword"
+
+  # Uncomment the following to require TLS transport security for SMTP.
+  # By default, Synapse will connect over plain text, and will then switch to
+  # TLS via STARTTLS *if the SMTP server supports it*. If this option is set,
+  # Synapse will refuse to connect unless the server supports STARTTLS.
+  #
+  #require_transport_security: true
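+  #
+  # As an illustrative example only (the hostname and credentials are
+  # placeholders), a typical authenticated STARTTLS setup might combine the
+  # settings above as:
+  #
+  #   smtp_host: smtp.example.com
+  #   smtp_port: 587
+  #   smtp_user: "synapse"
+  #   smtp_pass: "secret"
+  #   require_transport_security: true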
+
+  # notif_from defines the "From" address to use when sending emails.
+  # It must be set if email sending is enabled.
+  #
+  # The placeholder '%(app)s' will be replaced by the application name,
+  # which is normally 'app_name' (below), but may be overridden by the
+  # Matrix client application.
+  #
+  # Note that the placeholder must be written '%(app)s', including the
+  # trailing 's'.
+  #
+  #notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
+
+  # app_name defines the default value for '%(app)s' in notif_from and email
+  # subjects. It defaults to 'Matrix'.
+  #
+  #app_name: my_branded_matrix_server
+
+  # Uncomment the following to enable sending emails for messages that the user
+  # has missed. Disabled by default.
+  #
+  #enable_notifs: true
+
+  # Uncomment the following to disable automatic subscription to email
+  # notifications for new users. Enabled by default.
+  #
+  #notif_for_new_users: false
+
+  # Custom URL for client links within the email notifications. By default
+  # links will be based on "https://matrix.to".
+  #
+  # (This setting used to be called riot_base_url; the old name is still
+  # supported for backwards-compatibility but is now deprecated.)
+  #
+  #client_base_url: "http://localhost/riot"
+
+  # Configure the time that a validation email will expire after sending.
+  # Defaults to 1h.
+  #
+  #validation_token_lifetime: 15m
+
+  # The web client location to direct users to during an invite. This is passed
+  # to the identity server as the org.matrix.web_client_location key. Defaults
+  # to unset, giving no guidance to the identity server.
+  #
+  #invite_client_location: https://app.element.io
+
+  # Directory in which Synapse will try to find the template files below.
+  # If not set, or the files named below are not found within the template
+  # directory, default templates from within the Synapse package will be used.
+  #
+  # Synapse will look for the following templates in this directory:
+  #
+  # * The contents of email notifications of missed events: 'notif_mail.html' and
+  #   'notif_mail.txt'.
+  #
+  # * The contents of account expiry notice emails: 'notice_expiry.html' and
+  #   'notice_expiry.txt'.
+  #
+  # * The contents of password reset emails sent by the homeserver:
+  #   'password_reset.html' and 'password_reset.txt'
+  #
+  # * An HTML page that a user will see when they follow the link in the password
+  #   reset email. The user will be asked to confirm the action before their
+  #   password is reset: 'password_reset_confirmation.html'
+  #
+  # * HTML pages for success and failure that a user will see when they confirm
+  #   the password reset flow using the page above: 'password_reset_success.html'
+  #   and 'password_reset_failure.html'
+  #
+  # * The contents of address verification emails sent during registration:
+  #   'registration.html' and 'registration.txt'
+  #
+  # * HTML pages for success and failure that a user will see when they follow
+  #   the link in an address verification email sent during registration:
+  #   'registration_success.html' and 'registration_failure.html'
+  #
+  # * The contents of address verification emails sent when an address is added
+  #   to a Matrix account: 'add_threepid.html' and 'add_threepid.txt'
+  #
+  # * HTML pages for success and failure that a user will see when they follow
+  #   the link in an address verification email sent when an address is added
+  #   to a Matrix account: 'add_threepid_success.html' and
+  #   'add_threepid_failure.html'
+  #
+  # You can see the default templates at:
+  # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
+  #
+  #template_dir: "res/templates"
+
+  # Subjects to use when sending emails from Synapse.
+  #
+  # The placeholder '%(app)s' will be replaced with the value of the 'app_name'
+  # setting above, or by a value dictated by the Matrix client application.
+  #
+  # If a subject isn't overridden in this configuration file, the example
+  # value shown for it below will be used.
+  #
+  #subjects:
+
+    # Subjects for notification emails.
+    #
+    # On top of the '%(app)s' placeholder, these can use the following
+    # placeholders:
+    #
+    #   * '%(person)s', which will be replaced by the display name of the user(s)
+    #      that sent the message(s), e.g. "Alice and Bob".
+    #   * '%(room)s', which will be replaced by the name of the room the
+    #      message(s) have been sent to, e.g. "My super room".
+    #
+    # See the example provided for each setting to see which placeholder can be
+    # used and how to use them.
+    #
+    # Subject to use to notify about one message from one or more user(s) in a
+    # room which has a name.
+    #message_from_person_in_room: "[%(app)s] You have a message on %(app)s from %(person)s in the %(room)s room..."
+    #
+    # Subject to use to notify about one message from one or more user(s) in a
+    # room which doesn't have a name.
+    #message_from_person: "[%(app)s] You have a message on %(app)s from %(person)s..."
+    #
+    # Subject to use to notify about multiple messages from one or more users in
+    # a room which doesn't have a name.
+    #messages_from_person: "[%(app)s] You have messages on %(app)s from %(person)s..."
+    #
+    # Subject to use to notify about multiple messages in a room which has a
+    # name.
+    #messages_in_room: "[%(app)s] You have messages on %(app)s in the %(room)s room..."
+    #
+    # Subject to use to notify about multiple messages in multiple rooms.
+    #messages_in_room_and_others: "[%(app)s] You have messages on %(app)s in the %(room)s room and others..."
+    #
+    # Subject to use to notify about multiple messages from multiple persons in
+    # multiple rooms. This is similar to the setting above except it's used when
+    # the room in which the notification was triggered has no name.
+    #messages_from_person_and_others: "[%(app)s] You have messages on %(app)s from %(person)s and others..."
+    #
+    # Subject to use to notify about an invite to a room which has a name.
+    #invite_from_person_to_room: "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s..."
+    #
+    # Subject to use to notify about an invite to a room which doesn't have a
+    # name.
+    #invite_from_person: "[%(app)s] %(person)s has invited you to chat on %(app)s..."
+
+    # Subject for emails related to account administration.
+    #
+    # On top of the '%(app)s' placeholder, these can also use the
+    # '%(server_name)s' placeholder, which will be replaced by the value of the
+    # 'server_name' setting in your Synapse configuration.
+    #
+    # Subject to use when sending a password reset email.
+    #password_reset: "[%(server_name)s] Password reset"
+    #
+    # Subject to use when sending a verification email to assert an address's
+    # ownership.
+    #email_validation: "[%(server_name)s] Validate your email"
+
+
+# Password providers allow homeserver administrators to integrate
+# their Synapse installation with existing authentication methods
+# e.g. LDAP, external tokens, etc.
+#
+# For more information and known implementations, please see
+# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
+#
+# Note: instances wishing to use SAML or CAS authentication should
+# instead use the `saml2_config` or `cas_config` options,
+# respectively.
+#
+password_providers:
+#    # Example config for an LDAP auth provider
+#    - module: "ldap_auth_provider.LdapAuthProvider"
+#      config:
+#        enabled: true
+#        uri: "ldap://ldap.example.com:389"
+#        start_tls: true
+#        base: "ou=users,dc=example,dc=com"
+#        attributes:
+#           uid: "cn"
+#           mail: "email"
+#           name: "givenName"
+#        #bind_dn:
+#        #bind_password:
+#        #filter: "(objectClass=posixAccount)"
+
+
+
+## Push ##
+
+push:
+  # Clients requesting push notifications can either have the body of
+  # the message sent in the notification poke along with other details
+  # like the sender, or just the event ID and room ID (`event_id_only`).
+  # If clients choose the former, this option controls whether the
+  # notification request includes the content of the event (other details
+  # like the sender are still included). For `event_id_only` push, it
+  # has no effect.
+  #
+  # For modern Android devices, the notification content will still appear
+  # because it is loaded by the app. iPhones, however, will show a
+  # notification saying only that a message arrived and who it came from.
+  #
+  # The default value is "true" to include message details. Uncomment to only
+  # include the event ID and room ID in push notification payloads.
+  #
+  #include_content: false
+
+  # When a push notification is received, an unread count is also sent.
+  # This number can either be calculated as the number of unread messages
+  # for the user, or the number of *rooms* the user has unread messages in.
+  #
+  # The default value is "true", meaning push clients will see the number of
+  # rooms with unread messages in them. Uncomment to instead send the number
+  # of unread messages.
+  #
+  #group_unread_count_by_room: false
+
+
+# Spam checkers are third-party modules that can block specific actions
+# of local users, such as creating rooms and registering undesirable
+# usernames, as well as remote users by redacting incoming events.
+#
+spam_checker:
+   #- module: "my_custom_project.SuperSpamChecker"
+   #  config:
+   #    example_option: 'things'
+   #- module: "some_other_project.BadEventStopper"
+   #  config:
+   #    example_stop_events_from: ['@bad:example.com']
+
+
+## Rooms ##
+
+# Controls whether locally-created rooms should be end-to-end encrypted by
+# default.
+#
+# Possible options are "all", "invite", and "off". They are defined as:
+#
+# * "all": any locally-created room
+# * "invite": any room created with the "private_chat" or "trusted_private_chat"
+#             room creation presets
+# * "off": this option has no effect (rooms are not encrypted by default)
+#
+# The default value is "off".
+#
+# Note that this option will only affect rooms created after it is set. It
+# will also not affect rooms created by other servers.
+#
+#encryption_enabled_by_default_for_room_type: invite
+
+
+# Uncomment to allow non-server-admin users to create groups on this server
+#
+#enable_group_creation: true
+
+# If enabled, non-server-admin users can only create groups with local parts
+# starting with this prefix
+#
+#group_creation_prefix: "unofficial_"
+
+
+
+# User Directory configuration
+#
+user_directory:
+    # Defines whether users can search the user directory. If false then
+    # empty responses are returned to all queries. Defaults to true.
+    #
+    # Uncomment to disable the user directory.
+    #
+    #enabled: false
+
+    # Defines whether to search all users visible to your HS when searching
+    # the user directory, rather than limiting to users visible in public
+    # rooms. Defaults to false.
+    #
+    # If you set this to true, you'll have to rebuild the user_directory search
+    # indexes, see:
+    # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
+    #
+    # Uncomment to return search results containing all known users, even if that
+    # user does not share a room with the requester.
+    #
+    #search_all_users: true
+
+    # Defines whether to prefer local users in search query results.
+    # If true, local users are more likely to appear above remote users
+    # when searching the user directory. Defaults to false.
+    #
+    # Uncomment to prefer local over remote users in user directory search
+    # results.
+    #
+    #prefer_local_users: true
+
+
+# User Consent configuration
+#
+# for detailed instructions, see
+# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
+#
+# Parts of this section are required if enabling the 'consent' resource under
+# 'listeners', in particular 'template_dir' and 'version'.
+#
+# 'template_dir' gives the location of the templates for the HTML forms.
+# This directory should contain one subdirectory per language (eg, 'en', 'fr'),
+# and each language directory should contain the policy document (named as
+# '<version>.html') and a success page (success.html).
+#
+# 'version' specifies the 'current' version of the policy document. It defines
+# the version to be served by the consent resource if there is no 'v'
+# parameter.
+#
+# 'server_notice_content', if enabled, will send a user a "Server Notice"
+# asking them to consent to the privacy policy. The 'server_notices' section
+# must also be configured for this to work. Notices will *not* be sent to
+# guest users unless 'send_server_notice_to_guests' is set to true.
+#
+# 'block_events_error', if set, will block any attempts to send events
+# until the user consents to the privacy policy. The value of the setting is
+# used as the text of the error.
+#
+# 'require_at_registration', if enabled, will add a step to the registration
+# process, similar to how captcha works. Users will be required to accept the
+# policy before their account is created.
+#
+# 'policy_name' is the display name of the policy users will see when registering
+# for an account. Has no effect unless `require_at_registration` is enabled.
+# Defaults to "Privacy Policy".
+#
+#user_consent:
+#  template_dir: res/templates/privacy
+#  version: 1.0
+#  server_notice_content:
+#    msgtype: m.text
+#    body: >-
+#      To continue using this homeserver you must review and agree to the
+#      terms and conditions at %(consent_uri)s
+#  send_server_notice_to_guests: true
+#  block_events_error: >-
+#    To continue using this homeserver you must review and agree to the
+#    terms and conditions at %(consent_uri)s
+#  require_at_registration: false
+#  policy_name: Privacy Policy
+#
+
+
+
+# Settings for local room and user statistics collection. See
+# docs/room_and_user_statistics.md.
+#
+stats:
+  # Uncomment the following to disable room and user statistics. Note that doing
+  # so may cause certain features (such as the room directory) not to work
+  # correctly.
+  #
+  #enabled: false
+
+  # The size of each timeslice in the room_stats_historical and
+  # user_stats_historical tables, as a time period. Defaults to "1d".
+  #
+  #bucket_size: 1h
+
+
+# Server Notices room configuration
+#
+# Uncomment this section to enable a room which can be used to send notices
+# from the server to users. It is a special room which cannot be left; notices
+# come from a special "notices" user id.
+#
+# If you uncomment this section, you *must* define the system_mxid_localpart
+# setting, which defines the id of the user which will be used to send the
+# notices.
+#
+# It's also possible to override the room name, the display name of the
+# "notices" user, and the avatar for the user.
+#
+#server_notices:
+#  system_mxid_localpart: notices
+#  system_mxid_display_name: "Server Notices"
+#  system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
+#  room_name: "Server Notices"
+
+
+
+# Uncomment to disable searching the public room list. When disabled,
+# this blocks searching of local and remote room lists for local and
+# remote users by always returning an empty list for all queries.
+#
+#enable_room_list_search: false
+
+# The `alias_creation` option controls who's allowed to create aliases
+# on this server.
+#
+# The format of this option is a list of rules that contain globs that
+# match against user_id, room_id and the new alias (fully qualified with
+# server name). The action in the first rule that matches is taken,
+# which can currently either be "allow" or "deny".
+#
+# Missing user_id/room_id/alias fields default to "*".
+#
+# If no rules match, the request is denied. An empty list means no one
+# can create aliases.
+#
+# Options for the rules include:
+#
+#   user_id: Matches against the creator of the alias
+#   alias: Matches against the alias being created
+#   room_id: Matches against the room ID the alias is being pointed at
+#   action: Whether to "allow" or "deny" the request if the rule matches
+#
+# The default is:
+#
+#alias_creation_rules:
+#  - user_id: "*"
+#    alias: "*"
+#    room_id: "*"
+#    action: allow
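+#
+# For example (hypothetical server name), to allow only a single admin
+# user to create aliases and deny everyone else:
+#
+#alias_creation_rules:
+#  - user_id: "@admin:example.com"
+#    action: allow
+#  - user_id: "*"
+#    action: deny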
+
+# The `room_list_publication_rules` option controls who can publish and
+# which rooms can be published in the public room list.
+#
+# The format of this option is the same as that for
+# `alias_creation_rules`.
+#
+# If the room has one or more aliases associated with it, only one of
+# the aliases needs to match the alias rule. If there are no aliases
+# then only rules with `alias: *` match.
+#
+# If no rules match the request is denied. An empty list means no one
+# can publish rooms.
+#
+# Options for the rules include:
+#
+#   user_id: Matches against the creator of the alias
+#   room_id: Matches against the room ID being published
+#   alias: Matches against any current local or canonical aliases
+#            associated with the room
+#   action: Whether to "allow" or "deny" the request if the rule matches
+#
+# The default is:
+#
+#room_list_publication_rules:
+#  - user_id: "*"
+#    alias: "*"
+#    room_id: "*"
+#    action: allow
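+#
+# For example (hypothetical server name), to let only the server admin
+# publish rooms (all other requests are denied because no rule matches):
+#
+#room_list_publication_rules:
+#  - user_id: "@admin:example.com"
+#    action: allow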
+
+
+# Server admins can define a Python module that implements extra rules for
+# allowing or denying incoming events. In order to work, this module needs to
+# override the methods defined in synapse/events/third_party_rules.py.
+#
+# This feature is designed to be used in closed federations only, where each
+# participating server enforces the same rules.
+#
+#third_party_event_rules:
+#  module: "my_custom_project.SuperRulesSet"
+#  config:
+#    example_option: 'things'
+
+
+## Opentracing ##
+
+# These settings enable opentracing, which implements distributed tracing.
+# This allows you to observe the causal chains of events across servers
+# including requests, key lookups etc., across any server running
+# synapse or any other service which supports opentracing
+# (specifically those implemented with Jaeger).
+#
+opentracing:
+    # tracing is disabled by default. Uncomment the following line to enable it.
+    #
+    #enabled: true
+
+    # The list of homeservers with which we wish to exchange span contexts and span baggage.
+    # See docs/opentracing.rst.
+    #
+    # This is a list of regexes which are matched against the server_name of the
+    # homeserver.
+    #
+    # By default, it is empty, so no servers are matched.
+    #
+    #homeserver_whitelist:
+    #  - ".*"
+
+    # A list of the matrix IDs of users whose requests will always be traced,
+    # even if the tracing system would otherwise drop the traces due to
+    # probabilistic sampling.
+    #
+    # By default, the list is empty.
+    #
+    #force_tracing_for_users:
+    #  - "@user1:server_name"
+    #  - "@user2:server_name"
+
+    # Jaeger can be configured to sample traces at different rates.
+    # All configuration options provided by Jaeger can be set here.
+    # Jaeger's configuration is mostly related to trace sampling which
+    # is documented here:
+    # https://www.jaegertracing.io/docs/latest/sampling/.
+    #
+    #jaeger_config:
+    #  sampler:
+    #    type: const
+    #    param: 1
+    #  logging:
+    #    false
+
+
+## Workers ##
+
+# Disables sending of outbound federation transactions on the main process.
+# Uncomment if using a federation sender worker.
+#
+#send_federation: false
+
+# It is possible to run multiple federation sender workers, in which case the
+# work is balanced across them.
+#
+# This configuration must be shared between all federation sender workers, and if
+# changed all federation sender workers must be stopped at the same time and then
+# started, to ensure that all instances are running with the same config (otherwise
+# events may be dropped).
+#
+#federation_sender_instances:
+#  - federation_sender1
+
+# When using workers this should be a map from `worker_name` to the
+# HTTP replication listener of the worker, if configured.
+#
+#instance_map:
+#  worker1:
+#    host: localhost
+#    port: 8034
+
+# Experimental: When using workers you can define which workers should
+# handle event persistence and typing notifications. Any worker
+# specified here must also be in the `instance_map`.
+#
+#stream_writers:
+#  events: worker1
+#  typing: worker1
+
+# The worker that is used to run background tasks (e.g. cleaning up expired
+# data). If not provided this defaults to the main process.
+#
+#run_background_tasks_on: worker1
+
+# A shared secret used by the replication APIs to authenticate HTTP requests
+# from workers.
+#
+# By default this is unused and traffic is not authenticated.
+#
+#worker_replication_secret: ""
+
+
+# Configuration for Redis when using workers. This *must* be enabled when
+# using workers (unless using old style direct TCP configuration).
+#
+redis:
+  # Uncomment the below to enable Redis support.
+  #
+  #enabled: true
+
+  # Optional host and port to use to connect to redis. Defaults to
+  # localhost and 6379
+  #
+  #host: localhost
+  #port: 6379
+
+  # Optional password if configured on the Redis instance
+  #
+  #password: <secret_password>
+
+

Logging Sample Configuration File

+

Below is a sample logging configuration file. This file can be tweaked to control how your +homeserver will output logs. A restart of the server is generally required to apply any +changes made to this file.

+

Note that the contents below are not intended to be copied and used as the basis for +a real homeserver.yaml. Instead, if you are starting from scratch, please generate +a fresh config using Synapse by following the instructions in +Installation.

+
# Log configuration for Synapse.
+#
+# This is a YAML file containing a standard Python logging configuration
+# dictionary. See [1] for details on the valid settings.
+#
+# Synapse also supports structured logging for machine readable logs which can
+# be ingested by ELK stacks. See [2] for details.
+#
+# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
+# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md
+
+version: 1
+
+formatters:
+    precise:
+        format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
+
+handlers:
+    file:
+        class: logging.handlers.TimedRotatingFileHandler
+        formatter: precise
+        filename: /var/log/matrix-synapse/homeserver.log
+        when: midnight
+        backupCount: 3  # Does not include the current log file.
+        encoding: utf8
+
+    # Default to buffering writes to log file for efficiency. This means that
+    # there will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
+    # logs will still be flushed immediately.
+    buffer:
+        class: logging.handlers.MemoryHandler
+        target: file
+        # The capacity is the number of log lines that are buffered before
+        # being written to disk. Increasing this will lead to better
+        # performance, at the expense of it taking longer for log lines to
+        # be written to disk.
+        capacity: 10
+        flushLevel: 30  # Flush for WARNING logs as well
+
+    # A handler that writes logs to stderr. Unused by default, but can be used
+    # instead of "buffer" and "file" in the logger handlers.
+    console:
+        class: logging.StreamHandler
+        formatter: precise
+
+loggers:
+    synapse.storage.SQL:
+        # beware: increasing this to DEBUG will make synapse log sensitive
+        # information such as access tokens.
+        level: INFO
+
+    twisted:
+        # We send the twisted logging directly to the file handler,
+        # to work around https://github.com/matrix-org/synapse/issues/3471
+        # when using "buffer" logger. Use "console" to log to stderr instead.
+        handlers: [file]
+        propagate: false
+
+root:
+    level: INFO
+
+    # Write logs to the `buffer` handler, which will buffer them together in memory,
+    # then write them to a file.
+    #
+    # Replace "buffer" with "console" to log to stderr instead. (Note that you'll
+    # also need to update the configuration for the `twisted` logger above, in
+    # this case.)
+    #
+    handlers: [buffer]
+
+disable_existing_loggers: false
+
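The buffer handler above is a standard logging.handlers.MemoryHandler; its flushLevel: 30 is the numeric value of logging.WARNING, which is why WARNING and ERROR records skip the buffer. A minimal standalone sketch of the same behaviour (the logger name is hypothetical):

```python
import io
import logging
import logging.handlers

# Target handler that buffered records are eventually written to.
stream = io.StringIO()
target = logging.StreamHandler(stream)

# Buffer up to 10 records in memory; flush early on WARNING (numeric
# level 30) or above, mirroring the "buffer" handler in the sample config.
buffer = logging.handlers.MemoryHandler(
    capacity=10, flushLevel=logging.WARNING, target=target
)

log = logging.getLogger("buffer-demo")  # hypothetical logger name
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(buffer)

log.info("buffered")             # held in memory, not yet written
assert stream.getvalue() == ""
log.warning("flush now")         # WARNING triggers an immediate flush
assert stream.getvalue().splitlines() == ["buffered", "flush now"]
```

Increasing capacity trades log latency for fewer disk writes, as the comment in the sample config notes.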

Structured Logging

+

A structured logging system can be useful when your logs are destined for a +machine to parse and process. By maintaining its machine-readable characteristics, +it enables more efficient searching and aggregations when consumed by software +such as the "ELK stack".

+

Synapse's structured logging system is configured via the file that Synapse's +log_config config option points to. The file should include a formatter which +uses the synapse.logging.TerseJsonFormatter class included with Synapse and a +handler which uses the above formatter.

+

There is also a synapse.logging.JsonFormatter option which does not include +a timestamp in the resulting JSON. This is useful if the log ingester adds its +own timestamp.

+

A structured logging configuration looks similar to the following:

+
version: 1
+
+formatters:
+    structured:
+        class: synapse.logging.TerseJsonFormatter
+
+handlers:
+    file:
+        class: logging.handlers.TimedRotatingFileHandler
+        formatter: structured
+        filename: /path/to/my/logs/homeserver.log
+        when: midnight
+        backupCount: 3  # Does not include the current log file.
+        encoding: utf8
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [file]
+    synapse.storage.SQL:
+        level: WARNING
+
+

The above logging config will set Synapse to the 'INFO' logging level by default, +with the SQL layer at 'WARNING', and will log to a file, stored as JSON.

+

It is also possible to configure Synapse to log to a remote endpoint by using the +synapse.logging.RemoteHandler class included with Synapse. It takes the +following arguments:

+
    +
  • host: Hostname or IP address of the log aggregator.
  • +
  • port: Numerical port to contact on the host.
  • +
  • maximum_buffer: (Optional, defaults to 1000) The maximum buffer size to allow.
  • +
+

A remote structured logging configuration looks similar to the following:

+
version: 1
+
+formatters:
+    structured:
+        class: synapse.logging.TerseJsonFormatter
+
+handlers:
+    remote:
+        class: synapse.logging.RemoteHandler
+        formatter: structured
+        host: 10.1.2.3
+        port: 9999
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [remote]
+    synapse.storage.SQL:
+        level: WARNING
+
+

The above logging config will set Synapse to the 'INFO' logging level by default, +with the SQL layer at 'WARNING', and will log JSON-formatted messages to a +remote endpoint at 10.1.2.3:9999.

+

Upgrading from legacy structured logging configuration

+

Versions of Synapse prior to v1.23.0 included a custom structured logging +configuration which is deprecated. It used a structured: true flag and +configured drains instead of handlers and formatters.

+

Synapse currently automatically converts the old configuration to the new +configuration, but this will be removed in a future version of Synapse. The +following reference can be used to update your configuration. Based on the drain +type, we can pick a new handler:

+
    +
  1. For a type of console, console_json, or console_json_terse: a handler +with a class of logging.StreamHandler and a stream of ext://sys.stdout +or ext://sys.stderr should be used.
  2. +
  3. For a type of file or file_json: a handler of logging.FileHandler with +a filename of the file path should be used.
  4. +
  5. For a type of network_json_terse: a handler of synapse.logging.RemoteHandler +with the host and port should be used.
  6. +
+

Then based on the drain type we can pick a new formatter:

+
    +
  1. For a type of console or file no formatter is necessary.
  2. +
  3. For a type of console_json or file_json: a formatter of +synapse.logging.JsonFormatter should be used.
  4. +
  5. For a type of console_json_terse or network_json_terse: a formatter of +synapse.logging.TerseJsonFormatter should be used.
  6. +
+

For each new handler and formatter they should be added to the logging configuration +and then assigned to either a logger or the root logger.

+

An example legacy configuration:

+
structured: true
+
+loggers:
+    synapse:
+        level: INFO
+    synapse.storage.SQL:
+        level: WARNING
+
+drains:
+    console:
+        type: console
+        location: stdout
+    file:
+        type: file_json
+        location: homeserver.log
+
+

Would be converted into a new configuration:

+
version: 1
+
+formatters:
+    json:
+        class: synapse.logging.JsonFormatter
+
+handlers:
+    console:
+        class: logging.StreamHandler
+        stream: ext://sys.stdout
+    file:
+        class: logging.FileHandler
+        formatter: json
+        filename: homeserver.log
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [console, file]
+    synapse.storage.SQL:
+        level: WARNING
+
+

The new logging configuration is a bit more verbose, but significantly more +flexible. It allows for configurations that were not previously possible, such as +sending plain logs over the network, or using different handlers for different +modules.

+

User Authentication

+

Synapse supports multiple methods of authenticating users, either out-of-the-box or through custom pluggable +authentication modules.

+

Included in Synapse is support for authenticating users via:

+
    +
  • A username and password.
  • +
  • An email address and password.
  • +
  • Single Sign-On through the SAML, OpenID Connect or CAS protocols.
  • +
  • JSON Web Tokens.
  • +
  • An administrator's shared secret.
  • +
+

Synapse can additionally be extended to support custom authentication schemes through optional "password auth provider" +modules.

+

Configuring Synapse to authenticate against an OpenID Connect provider

+

Synapse can be configured to use an OpenID Connect Provider (OP) for +authentication, instead of its own local password database.

+

Any OP should work with Synapse, as long as it supports the authorization code +flow. There are a few options for that:

+
    +
  • +

    start a local OP. Synapse has been tested with Hydra and +Dex. Note that for an OP to work, it should be served under a +secure (HTTPS) origin. A certificate signed with a self-signed, locally +trusted CA should work. In that case, start Synapse with an SSL_CERT_FILE +environment variable set to the path of the CA.

    +
  • +
  • +

    set up a SaaS OP, like Google, Auth0 or +Okta. Synapse has been tested with Auth0 and Google.

    +
  • +
+

It may also be possible to use other OAuth2 providers which provide the +authorization code grant type, +such as Github.

+

Preparing Synapse

+

The OpenID integration in Synapse uses the +authlib library, which must be installed +as follows:

+
    +
  • +

    The relevant libraries are included in the Docker images and Debian packages +provided by matrix.org so no further action is needed.

    +
  • +
  • +

    If you installed Synapse into a virtualenv, run /path/to/env/bin/pip install matrix-synapse[oidc] to install the necessary dependencies.

    +
  • +
  • +

    For other installation mechanisms, see the documentation provided by the +maintainer.

    +
  • +
+

To enable the OpenID integration, you should then add a section to the oidc_providers +setting in your configuration file (or uncomment one of the existing examples). +See sample_config.yaml for some sample settings, as well as +the text below for example configurations for specific providers.

+

Sample configs

+

Here are a few configs for providers that should work with Synapse.

+

Microsoft Azure Active Directory

+

Azure AD can act as an OpenID Connect Provider. Register a new application under +App registrations in the Azure AD management console. The RedirectURI for your +application should point to your matrix server: +[synapse public baseurl]/_synapse/client/oidc/callback

+

Go to Certificates & secrets and register a new client secret. Make note of your +Directory (tenant) ID as it will be used in the Azure links. +Edit your Synapse config file and change the oidc_config section:

+
oidc_providers:
+  - idp_id: microsoft
+    idp_name: Microsoft
+    issuer: "https://login.microsoftonline.com/<tenant id>/v2.0"
+    client_id: "<client id>"
+    client_secret: "<client secret>"
+    scopes: ["openid", "profile"]
+    authorization_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize"
+    token_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token"
+    userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo"
+
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username.split('@')[0] }}"
+        display_name_template: "{{ user.name }}"
+
+
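The localpart_template above is a Jinja2 expression that strips the domain part from the Azure AD UPN; in plain Python terms (the UPN value here is hypothetical):

```python
# Hypothetical preferred_username claim returned by Azure AD.
preferred_username = "alice@contoso.example.com"

# Plain-Python equivalent of the Jinja2 template
# "{{ user.preferred_username.split('@')[0] }}":
localpart = preferred_username.split("@")[0]
assert localpart == "alice"
```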

Dex

+

Dex is a simple, open-source, certified OpenID Connect Provider. +Although it is designed to help build a full-blown provider with an +external database, it can be configured with static passwords in a config file.

+

Follow the Getting Started guide +to install Dex.

+

Edit the examples/config-dev.yaml config file from the Dex repo to add a client:

+
staticClients:
+- id: synapse
+  secret: secret
+  redirectURIs:
+  - '[synapse public baseurl]/_synapse/client/oidc/callback'
+  name: 'Synapse'
+
+

Run with dex serve examples/config-dev.yaml.

+

Synapse config:

+
oidc_providers:
+  - idp_id: dex
+    idp_name: "My Dex server"
+    skip_verification: true # This is needed as Dex is served on an insecure endpoint
+    issuer: "http://127.0.0.1:5556/dex"
+    client_id: "synapse"
+    client_secret: "secret"
+    scopes: ["openid", "profile"]
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.name }}"
+        display_name_template: "{{ user.name|capitalize }}"
+
+

Keycloak

+

Keycloak is an open-source IdP maintained by Red Hat.

+

Follow the Getting Started Guide to install Keycloak and set up a realm.

+
    +
  1. +

    Click Clients in the sidebar and click Create

    +
  2. +
  3. +

    Fill in the fields as below:

    +
  4. +
Field | Value
----- | -----
Client ID | synapse
Client Protocol | openid-connect
+
    +
  1. Click Save
  2. +
  3. Fill in the fields as below:
  4. +
Field | Value
----- | -----
Client ID | synapse
Enabled | On
Client Protocol | openid-connect
Access Type | confidential
Valid Redirect URIs | [synapse public baseurl]/_synapse/client/oidc/callback
+
    +
  1. Click Save
  2. +
  3. On the Credentials tab, update the fields:
  4. +
Field | Value
----- | -----
Client Authenticator | Client ID and Secret
+
    +
  1. Click Regenerate Secret
  2. +
  3. Copy Secret
  4. +
+
oidc_providers:
+  - idp_id: keycloak
+    idp_name: "My KeyCloak server"
+    issuer: "https://127.0.0.1:8443/auth/realms/{realm_name}"
+    client_id: "synapse"
+    client_secret: "copy secret generated from above"
+    scopes: ["openid", "profile"]
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+
+

Auth0

+
    +
  1. +

    Create a regular web application for Synapse

    +
  2. +
  3. +

    Set the Allowed Callback URLs to [synapse public baseurl]/_synapse/client/oidc/callback

    +
  4. +
  5. +

    Add a rule to add the preferred_username claim.

    +
    + Code sample +
    function addPersistenceAttribute(user, context, callback) {
    +  user.user_metadata = user.user_metadata || {};
    +  user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
    +  context.idToken.preferred_username = user.user_metadata.preferred_username;
    +
    +  auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
    +    .then(function(){
    +        callback(null, user, context);
    +    })
    +    .catch(function(err){
    +        callback(err);
    +    });
    +}
    +
    +
  6. +
+ +

Synapse config:

+
oidc_providers:
+  - idp_id: auth0
+    idp_name: Auth0
+    issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    scopes: ["openid", "profile"]
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+
+

GitHub

+

GitHub is a bit special as it is not an OpenID Connect compliant provider, but +just a regular OAuth2 provider.

+

The /user API endpoint +can be used to retrieve information on the authenticated user. As the Synapse +login mechanism needs an attribute to uniquely identify users, and that endpoint +does not return a sub property, an alternative subject_claim has to be set.

+
    +
  1. Create a new OAuth application: https://github.com/settings/applications/new.
  2. +
  3. Set the callback URL to [synapse public baseurl]/_synapse/client/oidc/callback.
  4. +
+

Synapse config:

+
oidc_providers:
+  - idp_id: github
+    idp_name: Github
+    idp_brand: "github"  # optional: styling hint for clients
+    discover: false
+    issuer: "https://github.com/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    authorization_endpoint: "https://github.com/login/oauth/authorize"
+    token_endpoint: "https://github.com/login/oauth/access_token"
+    userinfo_endpoint: "https://api.github.com/user"
+    scopes: ["read:user"]
+    user_mapping_provider:
+      config:
+        subject_claim: "id"
+        localpart_template: "{{ user.login }}"
+        display_name_template: "{{ user.name }}"
+
+

Google

+
    +
  1. Set up a project in the Google API Console (see +https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).
  2. +
  3. Add an "OAuth Client ID" for a Web Application under "Credentials".
  4. +
  5. Copy the Client ID and Client Secret, and add the following to your synapse config: +
    oidc_providers:
    +  - idp_id: google
    +    idp_name: Google
    +    idp_brand: "google"  # optional: styling hint for clients
    +    issuer: "https://accounts.google.com/"
    +    client_id: "your-client-id" # TO BE FILLED
    +    client_secret: "your-client-secret" # TO BE FILLED
    +    scopes: ["openid", "profile"]
    +    user_mapping_provider:
    +      config:
    +        localpart_template: "{{ user.given_name|lower }}"
    +        display_name_template: "{{ user.name }}"
    +
    +
  6. +
  7. Back in the Google console, add this Authorized redirect URI: [synapse public baseurl]/_synapse/client/oidc/callback.
  8. +
+

Twitch

+
    +
  1. Set up a developer account on Twitch
  2. +
  3. Obtain the OAuth 2.0 credentials by creating an app
  4. +
  5. Add this OAuth Redirect URL: [synapse public baseurl]/_synapse/client/oidc/callback
  6. +
+

Synapse config:

+
oidc_providers:
+  - idp_id: twitch
+    idp_name: Twitch
+    issuer: "https://id.twitch.tv/oauth2/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    client_auth_method: "client_secret_post"
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+
+

GitLab

+
    +
  1. Create a new application.
  2. +
  3. Add the read_user and openid scopes.
  4. +
  5. Add this Callback URL: [synapse public baseurl]/_synapse/client/oidc/callback
  6. +
+

Synapse config:

+
oidc_providers:
+  - idp_id: gitlab
+    idp_name: Gitlab
+    idp_brand: "gitlab"  # optional: styling hint for clients
+    issuer: "https://gitlab.com/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    client_auth_method: "client_secret_post"
+    scopes: ["openid", "read_user"]
+    user_profile_method: "userinfo_endpoint"
+    user_mapping_provider:
+      config:
+        localpart_template: '{{ user.nickname }}'
+        display_name_template: '{{ user.name }}'
+
+

Facebook

+

Like Github, Facebook provides a custom OAuth2 API rather than an OIDC-compliant +one, so it requires a little more configuration.

+
    +
  1. You will need a Facebook developer account. You can register for one +here.
  2. +
  3. On the apps page of the developer +console, "Create App", and choose "Build Connected Experiences".
  4. +
  5. Once the app is created, add "Facebook Login" and choose "Web". You don't +need to go through the whole form here.
  6. +
  7. In the left-hand menu, open "Products"/"Facebook Login"/"Settings". +
      +
    • Add [synapse public baseurl]/_synapse/client/oidc/callback as an OAuth Redirect +URL.
    • +
    +
  8. +
  9. In the left-hand menu, open "Settings/Basic". Here you can copy the "App ID" +and "App Secret" for use below.
  10. +
+

Synapse config:

+
  - idp_id: facebook
+    idp_name: Facebook
+    idp_brand: "facebook"  # optional: styling hint for clients
+    discover: false
+    issuer: "https://facebook.com"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    scopes: ["openid", "email"]
+    authorization_endpoint: https://facebook.com/dialog/oauth
+    token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
+    user_profile_method: "userinfo_endpoint"
+    userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
+    user_mapping_provider:
+      config:
+        subject_claim: "id"
+        display_name_template: "{{ user.name }}"
+
+

Relevant documents:

+
    +
  • https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow
  • +
  • Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/
  • +
  • Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user
  • +
+

Gitea

+

Gitea is, like Github, not an OpenID provider, but just an OAuth2 provider.

+

The /user API endpoint +can be used to retrieve information on the authenticated user. As the Synapse +login mechanism needs an attribute to uniquely identify users, and that endpoint +does not return a sub property, an alternative subject_claim has to be set.

+
    +
  1. Create a new application.
  2. +
  3. Add this Callback URL: [synapse public baseurl]/_synapse/client/oidc/callback
  4. +
+

Synapse config:

+
oidc_providers:
+  - idp_id: gitea
+    idp_name: Gitea
+    discover: false
+    issuer: "https://your-gitea.com/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_secret: "your-client-secret" # TO BE FILLED
+    client_auth_method: client_secret_post
+    scopes: [] # Gitea doesn't support Scopes
+    authorization_endpoint: "https://your-gitea.com/login/oauth/authorize"
+    token_endpoint: "https://your-gitea.com/login/oauth/access_token"
+    userinfo_endpoint: "https://your-gitea.com/api/v1/user"
+    user_mapping_provider:
+      config:
+        subject_claim: "id"
+        localpart_template: "{{ user.login }}"
+        display_name_template: "{{ user.full_name }}"
+
+

XWiki

+

Install the OpenID Connect Provider extension in your XWiki instance.

+

Synapse config:

+
oidc_providers:
+  - idp_id: xwiki
+    idp_name: "XWiki"
+    issuer: "https://myxwikihost/xwiki/oidc/"
+    client_id: "your-client-id" # TO BE FILLED
+    client_auth_method: none
+    scopes: ["openid", "profile"]
+    user_profile_method: "userinfo_endpoint"
+    user_mapping_provider:
+      config:
+        localpart_template: "{{ user.preferred_username }}"
+        display_name_template: "{{ user.name }}"
+
+

Apple

+

Configuring "Sign in with Apple" (SiWA) requires an Apple Developer account.

+

You will need to create a new "Services ID" for SiWA, and create and download a +private key with "SiWA" enabled.

+

As well as the private key file, you will need:

+
    +
  • Client ID: the "identifier" you gave the "Services ID"
  • +
  • Team ID: a 10-character ID associated with your developer account.
  • +
  • Key ID: the 10-character identifier for the key.
  • +
+

https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more +documentation on setting up SiWA.

+

The synapse config will look like this:

+
  - idp_id: apple
+    idp_name: Apple
+    issuer: "https://appleid.apple.com"
+    client_id: "your-client-id" # Set to the "identifier" for your "Services ID"
+    client_auth_method: "client_secret_post"
+    client_secret_jwt_key:
+      key_file: "/path/to/AuthKey_KEYIDCODE.p8"  # point to your key file
+      jwt_header:
+        alg: ES256
+        kid: "KEYIDCODE"   # Set to the 10-char Key ID
+      jwt_payload:
+        iss: TEAMIDCODE    # Set to the 10-char Team ID
+    scopes: ["name", "email", "openid"]
+    authorization_endpoint: https://appleid.apple.com/auth/authorize?response_mode=form_post
+    user_mapping_provider:
+      config:
+        email_template: "{{ user.email }}"
+
+

SSO Mapping Providers

+

A mapping provider is a Python class (loaded via a Python module) that +works out how to map attributes of an SSO response to Matrix-specific +user attributes. Details such as user ID localpart, displayname, and even avatar +URLs are all things that can be mapped from talking to an SSO service.

+

As an example, an SSO service may return the email address +"john.smith@example.com" for a user, whereas Synapse will need to figure out how +to turn that into a displayname when creating a Matrix user for this individual. +It may choose John Smith, or Smith, John [Example.com] or any number of +variations. As each Synapse configuration may want something different, this is +where SSO mapping providers come into play.

+

SSO mapping providers are currently supported for OpenID and SAML SSO +configurations. Please see the details below for how to implement your own.

+

It is up to the mapping provider whether the user should be assigned a predefined +Matrix ID based on the SSO attributes, or if the user should be allowed to +choose their own username.

+

In the first case - where users are automatically allocated a Matrix ID - it is +the responsibility of the mapping provider to normalise the SSO attributes and +map them to a valid Matrix ID. The specification for Matrix +IDs has some +information about what is considered valid.

+

If the mapping provider does not assign a Matrix ID, then Synapse will +automatically serve an HTML page allowing the user to pick their own username.

+

External mapping providers are provided to Synapse in the form of an external +Python module. You can retrieve this module from PyPI or elsewhere, +but it must be importable via Synapse (e.g. it must be in the same virtualenv +as Synapse). The Synapse config is then modified to point to the mapping provider +(and optionally provide additional configuration for it).

+

OpenID Mapping Providers

+

The OpenID mapping provider can be customized by editing the +oidc_config.user_mapping_provider.module config option.

+

oidc_config.user_mapping_provider.config allows you to provide custom +configuration options to the module. Check with the module's documentation for +what options it provides (if any). The options listed by default are for the +user mapping provider built in to Synapse. If using a custom module, you should +comment these options out and use those specified by the module instead.

+

Building a Custom OpenID Mapping Provider

+

A custom mapping provider must specify the following methods:

+
    +
  • __init__(self, parsed_config) +
      +
    • Arguments: +
        +
      • parsed_config - A configuration object that is the return value of the +parse_config method. You should set any configuration options needed by +the module here.
      • +
      +
    • +
    +
  • +
  • parse_config(config) +
      +
    • This method should have the @staticmethod decoration.
    • +
    • Arguments: +
        +
      • config - A dict representing the parsed content of the +oidc_config.user_mapping_provider.config homeserver config option. +Runs on homeserver startup. Providers should extract and validate +any option values they need here.
      • +
      +
    • +
    • Whatever is returned will be passed back to the user mapping provider module's +__init__ method during construction.
    • +
    +
  • +
  • get_remote_user_id(self, userinfo) +
      +
    • Arguments: +
        +
      • userinfo - An authlib.oidc.core.claims.UserInfo object to extract user information from.
      • +
      +
    • +
    • This method must return a string, which is the unique, immutable identifier +for the user. Commonly the sub claim of the response.
    • +
    +
  • +
  • map_user_attributes(self, userinfo, token, failures) +
      +
    • This method must be async.
    • +
    • Arguments: +
        +
      • userinfo - An authlib.oidc.core.claims.UserInfo object to extract user information from.
      • +
      • token - A dictionary which includes information necessary to make +further requests to the OpenID provider.
      • +
      • failures - An int that represents the number of times the returned +mxid localpart mapping has failed. This should be used +to create a deduplicated mxid localpart which should be +returned instead. For example, if this method returns +john.doe as the value of localpart in the returned +dict, and that is already taken on the homeserver, this +method will be called again with the same parameters but +with failures=1. The method should then return a different +localpart value, such as john.doe1.
      • +
      +
    • +
    • Returns a dictionary with two keys: +
        +
      • localpart: A string, used to generate the Matrix ID. If this is +None, the user is prompted to pick their own username. This is only used +during a user's first login. Once a localpart has been associated with a +remote user ID (see get_remote_user_id) it cannot be updated.
      • +
      • displayname: An optional string, the display name for the user.
      • +
      +
    • +
    +
  • +
  • get_extra_attributes(self, userinfo, token) +
      +
    • +

      This method must be async.

      +
    • +
    • +

      Arguments:

      +
        +
      • userinfo - An authlib.oidc.core.claims.UserInfo object to extract user information from.
      • +
      • token - A dictionary which includes information necessary to make +further requests to the OpenID provider.
      • +
      +
    • +
    • +

      Returns a dictionary that is suitable to be serialized to JSON. This +will be returned as part of the response during a successful login.

      +

      Note that care should be taken to not overwrite any of the parameters +usually returned as part of the login response.

      +
    • +
    +
  • +
+
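Putting the methods above together, a minimal custom OpenID mapping provider might look like the following sketch. Only the hook names and signatures come from the interface described above; the class name, the localpart_claim option, and the choice of claims are illustrative assumptions:

```python
# Sketch of a custom OpenID mapping provider. The hook names/signatures
# follow the interface documented above; everything else is illustrative.

class MyOidcMappingProvider:
    def __init__(self, parsed_config):
        # parsed_config is whatever parse_config returned
        self._localpart_claim = parsed_config["localpart_claim"]

    @staticmethod
    def parse_config(config):
        # Validate user_mapping_provider.config options at startup
        return {"localpart_claim": config.get("localpart_claim", "preferred_username")}

    def get_remote_user_id(self, userinfo):
        # The unique, immutable identifier for the user at the IdP
        return userinfo["sub"]

    async def map_user_attributes(self, userinfo, token, failures):
        localpart = userinfo.get(self._localpart_claim)
        if localpart is not None and failures:
            # De-duplicate by appending the failure count, e.g. john.doe1
            localpart = f"{localpart}{failures}"
        return {"localpart": localpart, "displayname": userinfo.get("name")}

    async def get_extra_attributes(self, userinfo, token):
        # Nothing extra to add to the login response in this sketch
        return {}
```

Such a class would then be referenced from the oidc_config.user_mapping_provider.module config option (e.g. my_module.MyOidcMappingProvider, assuming the module is importable by Synapse).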

Default OpenID Mapping Provider

+

Synapse has a built-in OpenID mapping provider if a custom provider isn't +specified in the config. It is located at +synapse.handlers.oidc.JinjaOidcMappingProvider.

+

SAML Mapping Providers

+

The SAML mapping provider can be customized by editing the +saml2_config.user_mapping_provider.module config option.

+

saml2_config.user_mapping_provider.config allows you to provide custom +configuration options to the module. Check with the module's documentation for +what options it provides (if any). The options listed by default are for the +user mapping provider built in to Synapse. If using a custom module, you should +comment these options out and use those specified by the module instead.

+

Building a Custom SAML Mapping Provider

+

A custom mapping provider must specify the following methods:

+
    +
  • __init__(self, parsed_config, module_api) +
      +
    • Arguments: +
        +
      • parsed_config - A configuration object that is the return value of the +parse_config method. You should set any configuration options needed by +the module here.
      • +
      • module_api - a synapse.module_api.ModuleApi object which provides the +stable API available for extension modules.
      • +
      +
    • +
    +
  • +
  • parse_config(config) +
      +
    • This method should have the @staticmethod decoration.
    • +
    • Arguments: +
        +
      • config - A dict representing the parsed content of the +saml_config.user_mapping_provider.config homeserver config option. +Runs on homeserver startup. Providers should extract and validate +any option values they need here.
      • +
      +
    • +
    • Whatever is returned will be passed back to the user mapping provider module's +__init__ method during construction.
    • +
    +
  • +
  • get_saml_attributes(config) +
      +
    • This method should have the @staticmethod decoration.
    • +
    • Arguments: +
        +
      • config - A object resulting from a call to parse_config.
      • +
      +
    • +
    • Returns a tuple of two sets. The first set equates to the SAML auth +response attributes that are required for the module to function, whereas +the second set consists of those attributes which can be used if available, +but are not necessary.
    • +
    +
  • +
  • get_remote_user_id(self, saml_response, client_redirect_url) +
      +
    • Arguments: +
        +
      • saml_response - A saml2.response.AuthnResponse object to extract user +information from.
      • +
      • client_redirect_url - A string, the URL that the client will be +redirected to.
      • +
      +
    • +
    • This method must return a string, which is the unique, immutable identifier +for the user. Commonly the uid claim of the response.
    • +
    +
  • +
  • saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url) +
      +
    • +

      Arguments:

      +
        +
      • saml_response - A saml2.response.AuthnResponse object to extract user +information from.
      • +
      • failures - An int that represents the number of times the returned +mxid localpart mapping has failed. This should be used +to create a deduplicated mxid localpart which should be +returned instead. For example, if this method returns +john.doe as the value of mxid_localpart in the returned +dict, and that is already taken on the homeserver, this +method will be called again with the same parameters but +with failures=1. The method should then return a different +mxid_localpart value, such as john.doe1.
      • +
      • client_redirect_url - A string, the URL that the client will be +redirected to.
      • +
      +
    • +
    • +

      This method must return a dictionary, which will then be used by Synapse +to build a new user. The following keys are allowed:

      +
        +
      • mxid_localpart - A string, the mxid localpart of the new user. If this is +None, the user is prompted to pick their own username. This is only used +during a user's first login. Once a localpart has been associated with a +remote user ID (see get_remote_user_id) it cannot be updated.
      • +
      • displayname - The displayname of the new user. If not provided, will default to +the value of mxid_localpart.
      • +
      • emails - A list of emails for the new user. If not provided, will +default to an empty list.
      • +
      +

      Alternatively it can raise a synapse.api.errors.RedirectException to +redirect the user to another page. This is useful to prompt the user for +additional information, e.g. if you want them to provide their own username. +It is the responsibility of the mapping provider to either redirect back +to client_redirect_url (including any additional information) or to +complete registration using methods from the ModuleApi.

      +
    • +
    +
  • +
+
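A minimal custom SAML mapping provider following the interface above might look like this sketch. The class name and the chosen attribute names (uid, displayName, email) are illustrative; user attributes are read from the response's ava attribute-value map as provided by pysaml2:

```python
# Sketch of a custom SAML mapping provider. The hook names/signatures
# follow the interface documented above; everything else is illustrative.

class MySamlMappingProvider:
    def __init__(self, parsed_config, module_api):
        self._config = parsed_config
        self._module_api = module_api

    @staticmethod
    def parse_config(config):
        # No options needed for this sketch; return the config as-is
        return config or {}

    @staticmethod
    def get_saml_attributes(config):
        # (required attributes, optional attributes)
        return {"uid"}, {"displayName", "email"}

    def get_remote_user_id(self, saml_response, client_redirect_url):
        # "ava" is pysaml2's attribute-value map; values are lists
        return saml_response.ava["uid"][0]

    def saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url):
        # Append the failure count to de-duplicate, e.g. jdoe -> jdoe1
        localpart = saml_response.ava["uid"][0] + (str(failures) if failures else "")
        return {
            "mxid_localpart": localpart,
            "displayname": saml_response.ava.get("displayName", [None])[0],
            "emails": saml_response.ava.get("email", []),
        }
```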

Default SAML Mapping Provider

+

Synapse has a built-in SAML mapping provider if a custom provider isn't +specified in the config. It is located at +synapse.handlers.saml.DefaultSamlMappingProvider.

+

Password auth provider modules

+

Password auth providers offer a way for server administrators to +integrate their Synapse installation with an existing authentication +system.

+

A password auth provider is a Python class which is dynamically loaded +into Synapse, and provides a number of methods by which it can integrate +with the authentication system.

+

This document serves as a reference for those looking to implement their +own password auth providers. Additionally, here is a list of known +password auth provider module implementations:

+ +

Required methods

+

Password auth provider classes must provide the following methods:

+
    +
  • +

    parse_config(config) +This method is passed the config object for this module from the +homeserver configuration file.

    +

    It should perform any appropriate sanity checks on the provided +configuration, and return an object which is then passed into +__init__.

    +

    This method should have the @staticmethod decoration.

    +
  • +
  • +

    __init__(self, config, account_handler)

    +

    The constructor is passed the config object returned by +parse_config, and a synapse.module_api.ModuleApi object which +allows the password provider to check if accounts exist and/or create +new ones.

    +
  • +
+

Optional methods

+

Password auth provider classes may optionally provide the following methods:

+
    +
  • +

    get_db_schema_files(self)

    +

    This method, if implemented, should return an Iterable of +(name, stream) pairs of database schema files. Each file is applied +in turn at initialisation, and a record is then made in the database +so that it is not re-applied on the next start.

    +
  • +
  • +

    get_supported_login_types(self)

    +

    This method, if implemented, should return a dict mapping from a +login type identifier (such as m.login.password) to an iterable +giving the fields which must be provided by the user in the submission +to the /login API. +These fields are passed in the login_dict dictionary to check_auth.

    +

    For example, if a password auth provider wants to implement a custom +login type of com.example.custom_login, where the client is expected +to pass the fields secret1 and secret2, the provider should +implement this method and return the following dict:

    +
    {"com.example.custom_login": ("secret1", "secret2")}
    +
    +
  • +
  • +

    check_auth(self, username, login_type, login_dict)

    +

    This method does the real work. If implemented, it +will be called for each login attempt where the login type matches one +of the keys returned by get_supported_login_types.

    +

    It is passed the (possibly unqualified) user field provided by the client, +the login type, and a dictionary of login secrets passed by the +client.

    +

    The method should return an Awaitable object, which resolves +to the canonical @localpart:domain user ID if authentication is +successful, and None if not.

    +

    Alternatively, the Awaitable can resolve to a (str, func) tuple, in +which case the second field is a callback which will be called with +the result from the /login call (including access_token, +device_id, etc.)

    +
  • +
  • +

    check_3pid_auth(self, medium, address, password)

    +

    This method, if implemented, is called when a user attempts to +register or log in with a third party identifier, such as email. It is +passed the medium (ex. "email"), an address (ex. +"jdoe@example.com") and the user's password.

    +

    The method should return an Awaitable object, which resolves +to a str containing the user's (canonical) User id if +authentication was successful, and None if not.

    +

    As with check_auth, the Awaitable may alternatively resolve to a +(user_id, callback) tuple.

    +
  • +
  • +

    check_password(self, user_id, password)

    +

    This method provides a simpler interface than +get_supported_login_types and check_auth for password auth +providers that just want to provide a mechanism for validating +m.login.password logins.

    +

    If implemented, it will be called to check logins with an +m.login.password login type. It is passed a qualified +@localpart:domain user id, and the password provided by the user.

    +

    The method should return an Awaitable object, which resolves +to True if authentication is successful, and False if not.

    +
  • +
  • +

    on_logged_out(self, user_id, device_id, access_token)

    +

    This method, if implemented, is called when a user logs out. It is +passed the qualified user ID, the ID of the deactivated device (if +any: access tokens are occasionally created without an associated +device ID), and the (now deactivated) access token.

    +

    It may return an Awaitable object; the logout request will +wait for the Awaitable to complete, but the result is ignored.

    +
  • +
+
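Tying the required hooks together with the simplest optional hook, a skeleton provider that validates m.login.password logins against a static list might look like this (the users config option is invented for the example; a real provider would talk to an external system):

```python
# Skeleton password auth provider. parse_config/__init__ are the required
# hooks; check_password is the simplest optional hook. The "users" config
# option is a hypothetical example, not a Synapse setting.

class ExamplePasswordProvider:
    def __init__(self, config, account_handler):
        # account_handler is a synapse.module_api.ModuleApi, usable to
        # check whether accounts exist and/or create new ones
        self._account_handler = account_handler
        self._users = config["users"]  # e.g. {"@alice:example.com": "hunter2"}

    @staticmethod
    def parse_config(config):
        # Sanity-check this module's section of homeserver.yaml
        if not isinstance(config.get("users"), dict):
            raise Exception("'users' must be a dict of user id -> password")
        return config

    async def check_password(self, user_id, password):
        # user_id is always a fully-qualified @localpart:domain id
        return self._users.get(user_id) == password
```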

JWT Login Type

+

Synapse comes with a non-standard login type to support +JSON Web Tokens. In general the +documentation for +the login endpoint +is still valid (and the mechanism works similarly to the +token-based login).

+

To log in using a JSON Web Token, clients should submit a /login request as +follows:

+
{
+  "type": "org.matrix.login.jwt",
+  "token": "<jwt>"
+}
+
+

Note that the login type of m.login.jwt is supported, but is deprecated. This +will be removed in a future version of Synapse.

+

The token field should include the JSON web token with the following claims:

+
    +
  • The sub (subject) claim is required and should encode the local part of the +user ID.
  • +
  • The expiration time (exp), not before time (nbf), and issued at (iat) +claims are optional, but validated if present.
  • +
  • The issuer (iss) claim is optional, but required and validated if configured.
  • +
  • The audience (aud) claim is optional, but required and validated if configured. +Providing the audience claim when not configured will cause validation to fail.
  • +
+

In the case that the token is not valid, the homeserver must respond with +403 Forbidden and an error code of M_FORBIDDEN.

+

As with other login types, there are additional fields (e.g. device_id and +initial_device_display_name) which can be included in the above request.

+

Preparing Synapse

+

The JSON Web Token integration in Synapse uses the +PyJWT library, which must be installed +as follows:

+
    +
  • +

    The relevant libraries are included in the Docker images and Debian packages +provided by matrix.org so no further action is needed.

    +
  • +
  • +

    If you installed Synapse into a virtualenv, run /path/to/env/bin/pip install synapse[pyjwt] to install the necessary dependencies.

    +
  • +
  • +

    For other installation mechanisms, see the documentation provided by the +maintainer.

    +
  • +
+

To enable the JSON web token integration, you should then add a jwt_config section +to your configuration file (or uncomment the enabled: true line in the +existing section). See sample_config.yaml for some +sample settings.

+

How to test JWT as a developer

+

Although JSON Web Tokens are typically generated from an external server, the +examples below use PyJWT directly.

+
    +
  1. +

    Configure Synapse with JWT logins, note that this example uses a pre-shared +secret and an algorithm of HS256:

    +
    jwt_config:
    +    enabled: true
    +    secret: "my-secret-token"
    +    algorithm: "HS256"
    +
    +
  2. +
  3. +

    Generate a JSON web token:

    +
    $ pyjwt --key=my-secret-token --alg=HS256 encode sub=test-user
    +eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc
    +
    +
  4. +
  5. +

    Query for the login types and ensure org.matrix.login.jwt is there:

    +
    curl http://localhost:8080/_matrix/client/r0/login
    +
    +
  6. +
  7. +

    Log in using the generated JSON web token from above:

    +
    $ curl http://localhost:8080/_matrix/client/r0/login -X POST \
    +    --data '{"type":"org.matrix.login.jwt","token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc"}'
    +{
    +    "access_token": "<access token>",
    +    "device_id": "ACBDEFGHI",
    +    "home_server": "localhost:8080",
    +    "user_id": "@test-user:localhost:8080"
    +}
    +
    +
  8. +
+

You should now be able to use the returned access token to query the client API.
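The pyjwt CLI step above can also be reproduced with only the Python standard library; this sketch hand-rolls the HS256 signing that pyjwt performs (the secret and subject match the example above):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_hs256_jwt(secret: str, claims: dict) -> str:
    # header.payload is signed with HMAC-SHA256, keyed by the shared secret
    header = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}, separators=(",", ":")).encode())
    payload = b64url(json.dumps(claims, separators=(",", ":")).encode())
    signing_input = f"{header}.{payload}".encode("ascii")
    sig = hmac.new(secret.encode("utf-8"), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

token = make_hs256_jwt("my-secret-token", {"sub": "test-user"})
# token matches the JWT produced by the pyjwt command above
```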

+

Overview

+

A captcha can be enabled on your homeserver to help prevent bots from registering +accounts. Synapse currently uses Google's reCAPTCHA service which requires API keys +from Google.

+

Getting API keys

+
    +
  1. Create a new site at https://www.google.com/recaptcha/admin/create
  2. +
  3. Set the label to anything you want
  4. +
  5. Set the type to reCAPTCHA v2 using the "I'm not a robot" Checkbox option. +This is the only type of captcha that works with Synapse.
  6. +
  7. Add the public hostname for your server, as set in public_baseurl +in homeserver.yaml, to the list of authorized domains. If you have not set +public_baseurl, use server_name.
  8. +
  9. Agree to the terms of service and submit.
  10. +
  11. Copy your site key and secret key and add them to your homeserver.yaml +configuration file +
    recaptcha_public_key: YOUR_SITE_KEY
    +recaptcha_private_key: YOUR_SECRET_KEY
    +
    +
  12. +
  13. Enable the CAPTCHA for new registrations +
    enable_registration_captcha: true
    +
    +
  14. +
  15. Go to the settings page for the CAPTCHA you just created
  16. +
  17. Uncheck the "Verify the origin of reCAPTCHA solutions" checkbox so that the +captcha can be displayed in any client. If you do not disable this option then you +must specify the domains of every client that is allowed to display the CAPTCHA.
  18. +
+

Configuring IP used for auth

+

The reCAPTCHA API requires that the IP address of the user who solved the +CAPTCHA is sent. If the client is connecting through a proxy or load balancer, +it may be required to use the X-Forwarded-For (XFF) header instead of the origin +IP address. This can be configured using the x_forwarded directive in the +listeners section of the homeserver.yaml configuration file.

+

Registering an Application Service

+

The registration of new application services depends on the homeserver used. +In synapse, you need to create a new configuration file for your AS and add it +to the list specified under the app_service_config_files config +option in your synapse config.

+

For example:

+
app_service_config_files:
+- /home/matrix/.synapse/<your-AS>.yaml
+
+

The format of the AS configuration file is as follows:

+
url: <base url of AS>
+as_token: <token AS will add to requests to HS>
+hs_token: <token HS will add to requests to AS>
+sender_localpart: <localpart of AS user>
+namespaces:
+  users:  # List of users we're interested in
+    - exclusive: <bool>
+      regex: <regex>
+      group_id: <group>
+    - ...
+  aliases: []  # List of aliases we're interested in
+  rooms: [] # List of room ids we're interested in
+
+

exclusive: If enabled, only this application service is allowed to register users in its namespace(s). +group_id: All users of this application service are dynamically joined to this group. This is useful for, e.g., user organisation or flairs.

+
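As a concrete illustration (all values are hypothetical, and the token placeholders must be replaced with real secrets), an IRC bridge that exclusively owns user ids and aliases beginning with _irc_ might register as:

```yaml
url: http://localhost:9000            # where the AS listens for HS requests
as_token: "<token AS will add to requests to HS>"
hs_token: "<token HS will add to requests to AS>"
sender_localpart: irc_bot
namespaces:
  users:
    - exclusive: true
      regex: "@_irc_.*:example.org"
  aliases:
    - exclusive: true
      regex: "#_irc_.*:example.org"
  rooms: []
```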

See the spec for further details on how application services work.

+

Server Notices

+

'Server Notices' are a new feature introduced in Synapse 0.30. They provide a +channel whereby server administrators can send messages to users on the server.

+

They are used as part of the communication of server policies (see +consent_tracking.md); however, the intention is that +they may also find a use for features such as "Message of the day".

+

This is a feature specific to Synapse, but it uses standard Matrix +communication mechanisms, so should work with any Matrix client.

+

User experience

+

When the user is first sent a server notice, they will get an invitation to a +room (typically called 'Server Notices', though this is configurable in +homeserver.yaml). They will be unable to reject this invitation - +attempts to do so will receive an error.

+

Once they accept the invitation, they will see the notice message in the room +history; it will appear to have come from the 'server notices user' (see +below).

+

The user is prevented from sending any messages in this room by the power +levels.

+

Having joined the room, the user can leave the room if they want. Subsequent +server notices will then cause a new room to be created.

+

Synapse configuration

+

Server notices come from a specific user id on the server. Server +administrators are free to choose the user id - something like server is +suggested, meaning the notices will come from +@server:<your_server_name>. Once the Server Notices user is configured, that +user id becomes a special, privileged user, so administrators should ensure +that it is not already allocated.

+

In order to support server notices, it is necessary to add some configuration +to the homeserver.yaml file. In particular, you should add a server_notices +section, which should look like this:

+
server_notices:
+   system_mxid_localpart: server
+   system_mxid_display_name: "Server Notices"
+   system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
+   room_name: "Server Notices"
+
+

The only compulsory setting is system_mxid_localpart, which defines the user +id of the Server Notices user, as above. room_name defines the name of the +room which will be created.

+

system_mxid_display_name and system_mxid_avatar_url can be used to set the +displayname and avatar of the Server Notices user.

+

Sending notices

+

To send server notices to users you can use the +admin_api.

+

Support in Synapse for tracking agreement to server terms and conditions

+

Synapse 0.30 introduces support for tracking whether users have agreed to the +terms and conditions set by the administrator of a server - and blocking access +to the server until they have.

+

There are several parts to this functionality; each requires some specific +configuration in homeserver.yaml to be enabled.

+

Note that various parts of the configuration and this document refer to the +"privacy policy": agreement with a privacy policy is one particular use of this +feature, but of course administrators can specify other terms and conditions +unrelated to "privacy" per se.

+

Collecting policy agreement from a user

+

Synapse can be configured to serve the user a simple policy form with an +"accept" button. Clicking "Accept" records the user's acceptance in the +database and shows a success page.

+

To enable this, first create templates for the policy and success pages. +These should be stored on the local filesystem.

+

These templates use the Jinja2 templating language, +and docs/privacy_policy_templates gives +examples of the sort of thing that can be done.

+

Note that the templates must be stored under a name giving the language of the +template - currently this must always be en (for "English"); +internationalisation support is intended for the future.

+

The template for the policy itself should be versioned and named according to +the version: for example 1.0.html. The version of the policy which the user +has agreed to is stored in the database.

+

Once the templates are in place, make the following changes to homeserver.yaml:

+
    +
  1. +

    Add a user_consent section, which should look like:

    +
    user_consent:
    +  template_dir: privacy_policy_templates
    +  version: 1.0
    +
    +

    template_dir points to the directory containing the policy +templates. version defines the version of the policy which will be served +to the user. In the example above, Synapse will serve +privacy_policy_templates/en/1.0.html.

    +
  2. +
  3. +

    Add a form_secret setting at the top level:

    +
    form_secret: "<unique secret>"
    +
    +

    This should be set to an arbitrary secret string (try pwgen -y 30 to +generate suitable secrets).

    +

    More on what this is used for below.

    +
  4. +
  5. +

    Add consent wherever the client resource is currently enabled in the +listeners configuration. For example:

    +
    listeners:
    +  - port: 8008
    +    resources:
    +      - names:
    +        - client
    +        - consent
    +
    +
  6. +
+

Finally, ensure that jinja2 is installed. If you are using a virtualenv, this +should be a matter of pip install Jinja2. On Debian, try apt-get install python-jinja2.

+

Once this is complete, and the server has been restarted, try visiting +https://<server>/_matrix/consent. If correctly configured, this should give +an error "Missing string query parameter 'u'". It is now possible to manually +construct URIs where users can give their consent.

+

Enabling consent tracking at registration

+
    +
  1. +

    Add the following to your configuration:

    +
    user_consent:
    +  require_at_registration: true
    +  policy_name: "Privacy Policy" # or whatever you'd like to call the policy
    +
    +
  2. +
  3. +

    In your consent templates, make use of the public_version variable to +see if an unauthenticated user is viewing the page. This is typically +wrapped around the form that would be used to actually agree to the document:

    +
    {% if not public_version %}
    +  <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
    +  <form method="post" action="consent">
    +    <input type="hidden" name="v" value="{{version}}"/>
    +    <input type="hidden" name="u" value="{{user}}"/>
    +    <input type="hidden" name="h" value="{{userhmac}}"/>
    +    <input type="submit" value="Sure thing!"/>
    +  </form>
    +{% endif %}
    +
    +
  4. +
  5. +

    Restart Synapse to apply the changes.

    +
  6. +
+

Visiting https://<server>/_matrix/consent should now give you a view of the privacy +document. This is what users will be able to see when registering for accounts.

+

Constructing the consent URI

+

It may be useful to manually construct the "consent URI" for a given user - for +instance, in order to send them an email asking them to consent. To do this, +take the base https://<server>/_matrix/consent URL and add the following +query parameters:

+
    +
  • +

    u: the user id of the user. This can either be a full MXID +(@user:server.com) or just the localpart (user).

    +
  • +
  • +

    h: hex-encoded HMAC-SHA256 of u using the form_secret as a key. It is +possible to calculate this on the commandline with something like:

    +
    echo -n '<user>' | openssl sha256 -hmac '<form_secret>'
    +
    +

    This should result in a URI which looks something like: +https://<server>/_matrix/consent?u=<user>&h=68a152465a4d....

    +
  • +
+
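Equivalently, the h parameter can be computed in Python with the standard library; the user id and secret below are placeholders for illustration:

```python
import hashlib
import hmac

def consent_hmac(user: str, form_secret: str) -> str:
    # Hex-encoded HMAC-SHA256 of the u parameter, keyed with form_secret
    return hmac.new(form_secret.encode("utf-8"), user.encode("utf-8"), hashlib.sha256).hexdigest()

# Placeholder values; substitute your real user id and form_secret:
h = consent_hmac("@user:server.com", "my-form-secret")
uri = f"https://<server>/_matrix/consent?u=@user:server.com&h={h}"
```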

Note that not providing a u parameter will be interpreted as wanting to view +the document from an unauthenticated perspective, such as prior to registration. +Therefore, the h parameter is not required in this scenario. To enable this +behaviour, set require_at_registration to true in your user_consent config.

+

Sending users a server notice asking them to agree to the policy

+

It is possible to configure Synapse to send a server +notice to anybody who has not yet agreed to the current +version of the policy. To do so:

+
    +
  • +

    ensure that the consent resource is configured, as in the previous section

    +
  • +
  • +

    ensure that server notices are configured, as in server_notices.md.

    +
  • +
  • +

    Add server_notice_content under user_consent in homeserver.yaml. For +example:

    +
    user_consent:
    +  server_notice_content:
    +    msgtype: m.text
    +    body: >-
    +      Please give your consent to the privacy policy at %(consent_uri)s.
    +
    +

    Synapse automatically replaces the placeholder %(consent_uri)s with the +consent uri for that user.

    +
  • +
  • +

    ensure that public_baseurl is set in homeserver.yaml, and gives the base +URI that clients use to connect to the server. (It is used to construct +consent_uri in the server notice.)

    +
  • +
+

Blocking users from using the server until they agree to the policy

+

Synapse can be configured to block any attempts to join rooms or send messages +until the user has given their agreement to the policy. (Joining the server +notices room is exempted from this).

+

To enable this, add block_events_error under user_consent. For example:

+
user_consent:
+  block_events_error: >-
+    You can't send any messages until you consent to the privacy policy at
+    %(consent_uri)s.
+
+

Synapse automatically replaces the placeholder %(consent_uri)s with the +consent uri for that user.

+

ensure that public_baseurl is set in homeserver.yaml, and gives the base +URI that clients use to connect to the server. (It is used to construct +consent_uri in the error.)

+

URL Previews

+

Design notes on a URL previewing service for Matrix:

+

Options are:

+
    +
  1. Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata.
  2. +
+
    +
  • Pros: +
      +
    • Decouples the implementation entirely from Synapse.
    • +
    • Uses existing Matrix events & content repo to store the metadata.
    • +
    +
  • +
  • Cons: +
      +
    • Which AS should provide this service for a room, and why should you trust it?
    • +
    • Doesn't work well with E2E; you'd have to cut the AS into every room
    • +
    • the AS would end up subscribing to every room anyway.
    • +
    +
  • +
+
    +
  2. Have a generic preview API (nothing to do with Matrix) that provides a previewing service:
  2. +
+
    +
  • Pros: +
      +
    • Simple and flexible; can be used by any clients at any point
    • +
    +
  • +
  • Cons: +
      +
    • If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI
    • +
    • We need somewhere to store the URL metadata rather than just using Matrix itself
    • +
    • We can't piggyback on matrix to distribute the metadata between HSes.
    • +
    +
  • +
+
    +
  3. Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata.
  2. +
+
    +
  • Pros: +
      +
    • Works transparently for all clients
    • +
    • Piggy-backs nicely on using Matrix for distributing the metadata.
    • +
    • No confusion as to which AS
    • +
    +
  • +
  • Cons: +
      +
    • Doesn't work with E2E
    • +
    • We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS. It's more like a bot than a core part of the server.
    • +
    +
  • +
+
    +
  4. Make the sending client use the preview API and insert the event itself when successful.
  2. +
+
    +
  • Pros: +
      +
    • Works well with E2E
    • +
    • No custom server functionality
    • +
    • Lets the client customise the preview that they send (like on FB)
    • +
    +
  • +
  • Cons: +
      +
    • Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it.
    • +
    +
  • +
+
    +
  5. Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target.
  2. +
+

Best solution is probably a combination of both 2 and 4.

+
    +
  • Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed? (This also lets the user validate the preview before sending)
  • +
  • Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one)
  • +
+

This is a bit magical though in that the preview could come from two entirely different sources - the sending HS or your local one. However, this can always be exposed to users: "Generate your own URL previews if none are available?"


This is tantamount also to senders calculating their own thumbnails for sending in advance of the main content - we are trusting the sender not to lie about the content in the thumbnail. Whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack.


However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate. We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions). Bogus metadata is particularly bad though, especially if it's avoidable.


As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails). We can then extend/optimise this to option 4 as a special extra if needed.


API

GET /_matrix/media/r0/preview_url?url=http://wherever.com

200 OK
{
    "og:type"        : "article",
    "og:url"         : "https://twitter.com/matrixdotorg/status/684074366691356672",
    "og:title"       : "Matrix on Twitter",
    "og:image"       : "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png",
    "og:description" : "“Synapse 0.12 is out! Lots of polishing, performance & bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”",
    "og:site_name"   : "Twitter"
}
• Downloads the URL
  • If HTML, just stores it in RAM and parses it for OG meta tags
    • Download any media OG meta tags to the media repo, and refer to them in the OG via mxc:// URIs.
  • If a media filetype we know we can thumbnail: store it on disk, and hand it to the thumbnailer. Generate OG meta tags from the thumbnailer contents.
  • Otherwise, don't bother downloading further.
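For illustration, the "parses it for OG meta tags" step above can be sketched with a minimal extractor built on the Python standard library (a sketch only, not Synapse's actual implementation):

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects Open Graph tags: <meta property="og:..." content="...">."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            self.og[prop] = attrs["content"]

def extract_og(document: str) -> dict:
    """Parse an HTML document and return its og:* meta tags as a dict."""
    parser = OGParser()
    parser.feed(document)
    return parser.og

doc = (
    '<html><head>'
    '<meta property="og:type" content="article">'
    '<meta property="og:title" content="Matrix on Twitter">'
    '</head></html>'
)
print(extract_og(doc))  # {'og:type': 'article', 'og:title': 'Matrix on Twitter'}
```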

User Directory API Implementation

The user directory is currently maintained based on the 'visible' users on this particular server - i.e. ones which your account shares a room with, or who are present in a publicly viewable room on the server.

The directory info is stored in various tables, which can (typically after DB corruption) get stale or out of sync. If this happens, for now the solution to fix it is to execute the SQL here and then restart synapse. This should then start a background task to flush the current tables and regenerate the directory.

Message retention policies

Synapse admins can enable support for message retention policies on their homeserver. Message retention policies exist at a room level, follow the semantics described in MSC1763, and allow server and room admins to configure how long messages should be kept in a homeserver's database before being purged from it. Please note that, as this feature isn't part of the Matrix specification yet, this implementation should be considered experimental.

A message retention policy is mainly defined by its max_lifetime parameter, which defines how long a message can be kept around after it was sent to the room. If a room doesn't have a message retention policy, and there's no default one for a given server, then no message sent in that room is ever purged on that server.

MSC1763 also specifies semantics for a min_lifetime parameter, which defines the amount of time after which an event can get purged (after it was sent to the room), but Synapse doesn't currently support it beyond registering it.

Both max_lifetime and min_lifetime are optional parameters.

Note that message retention policies don't apply to state events.

Once an event reaches its expiry date (defined as the time it was sent plus the value for max_lifetime in the room), two things happen:

• Synapse stops serving the event to clients via any endpoint.
• The message gets picked up by the next purge job (see the "Purge jobs" section) and is removed from Synapse's database.

Since purge jobs don't run continuously, this means that an event might stay in a server's database for longer than the value for max_lifetime in the room would allow, though hidden from clients.
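The expiry date described above is simple arithmetic on timestamps; as a sketch in Python (all values in milliseconds; the helper name is ours, not Synapse's):

```python
ONE_DAY_MS = 24 * 60 * 60 * 1000

def expiry_ts(origin_server_ts: int, max_lifetime_ms: int) -> int:
    """Expiry = the time the event was sent plus the room's max_lifetime."""
    return origin_server_ts + max_lifetime_ms

sent_at = 1_600_000_000_000  # an example origin_server_ts
print(expiry_ts(sent_at, 7 * ONE_DAY_MS) - sent_at)  # 604800000, i.e. one week
```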

Similarly, if a server (with support for message retention policies enabled) receives from another server an event that should have been purged according to its room's policy, then the receiving server will process and store that event until it's picked up by the next purge job, though it will always hide it from clients.

Synapse requires at least one message in each room, so it will never delete the last message in a room. It will, however, hide it from clients.

Server configuration

Support for this feature can be enabled and configured in the retention section of the Synapse configuration file (see the sample file).

To enable support for message retention policies, set the setting enabled in this section to true.

Default policy

A default message retention policy is a policy defined in Synapse's configuration that is used by Synapse for every room that doesn't have a message retention policy configured in its state. This allows server admins to ensure that messages are never kept indefinitely in a server's database.

A default policy can be defined as such, in the retention section of the configuration file:

default_policy:
  min_lifetime: 1d
  max_lifetime: 1y

Here, min_lifetime and max_lifetime have the same meaning and level of support as previously described. They can be expressed either as a duration (using the units s (seconds), m (minutes), h (hours), d (days), w (weeks) and y (years)) or as a number of milliseconds.
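As an illustration of these units, a toy converter from duration strings to milliseconds (a sketch only: Synapse's real parser lives in its config code, and treating a year as 365 days is an assumption made here):

```python
# Milliseconds per unit; "y" as 365 days is an assumption for this sketch.
UNITS_MS = {
    "s": 1000,
    "m": 60 * 1000,
    "h": 60 * 60 * 1000,
    "d": 24 * 60 * 60 * 1000,
    "w": 7 * 24 * 60 * 60 * 1000,
    "y": 365 * 24 * 60 * 60 * 1000,
}

def parse_duration_ms(value) -> int:
    """Accept a raw number of milliseconds or a '<number><unit>' string."""
    if isinstance(value, int):
        return value
    number, unit = value[:-1], value[-1]
    return int(number) * UNITS_MS[unit]

print(parse_duration_ms("1d"))  # 86400000
print(parse_duration_ms("1y"))  # 31536000000
```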

Purge jobs

Purge jobs are the jobs that Synapse runs in the background to purge expired events from the database. They are only run if support for message retention policies is enabled in the server's configuration. If no configuration for purge jobs is configured by the server admin, Synapse will use a default configuration, which is described in the sample configuration file.

Some server admins might want finer control over when events are removed, depending on an event's room's policy. This can be done by setting the purge_jobs sub-section in the retention section of the configuration file. An example of such a configuration could be:

purge_jobs:
  - longest_max_lifetime: 3d
    interval: 12h
  - shortest_max_lifetime: 3d
    longest_max_lifetime: 1w
    interval: 1d
  - shortest_max_lifetime: 1w
    interval: 2d

In this example, we define three jobs:

• one that runs twice a day (every 12 hours) and purges events in rooms whose policy's max_lifetime is lower than or equal to 3 days.
• one that runs once a day and purges events in rooms whose policy's max_lifetime is between 3 days and a week.
• one that runs once every 2 days and purges events in rooms whose policy's max_lifetime is greater than a week.

Note that this example is tailored to show different configurations and features slightly more jobs than is probably necessary (in practice, a server admin would probably consider it better to replace the two last jobs with one that runs once a day and handles rooms whose policy's max_lifetime is greater than 3 days).

Keep in mind, when configuring these jobs, that a purge job can become quite heavy on the server if it targets many rooms, therefore prefer having jobs with a low interval that target a limited set of rooms. Also make sure to include a job with no minimum and one with no maximum to make sure your configuration handles every policy.

As previously mentioned in this documentation, while a purge job that runs e.g. every day means that an expired event might stay in the database for up to a day after its expiry, Synapse hides expired events from clients as soon as they expire, so the event is not visible to local users between its expiry date and the moment it gets purged from the server's database.

Lifetime limits

Server admins can set limits on the values of max_lifetime to use when purging old events in a room. These limits can be defined as such in the retention section of the configuration file:

allowed_lifetime_min: 1d
allowed_lifetime_max: 1y

The limits are considered when running purge jobs. If necessary, the effective value of max_lifetime will be brought between allowed_lifetime_min and allowed_lifetime_max (inclusive). This means that, if the value of max_lifetime defined in the room's state is lower than allowed_lifetime_min, the value of allowed_lifetime_min will be used instead. Likewise, if the value of max_lifetime is higher than allowed_lifetime_max, the value of allowed_lifetime_max will be used instead.

In the example above, we ensure Synapse never deletes events that are less than one day old, and that it always deletes events that are over a year old.

If a default policy is set, and its max_lifetime value is lower than allowed_lifetime_min or higher than allowed_lifetime_max, the same process applies.

Both parameters are optional; if one is omitted Synapse won't use it to adjust the effective value of max_lifetime.

Like other settings in this section, these parameters can be expressed either as a duration or as a number of milliseconds.
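The clamping behaviour described above amounts to bringing max_lifetime into the allowed range; a sketch (values in milliseconds; the function name is ours):

```python
def effective_max_lifetime(max_lifetime, allowed_min=None, allowed_max=None):
    """Clamp a room's max_lifetime into [allowed_min, allowed_max];
    an omitted bound is simply not applied."""
    if allowed_min is not None:
        max_lifetime = max(max_lifetime, allowed_min)
    if allowed_max is not None:
        max_lifetime = min(max_lifetime, allowed_max)
    return max_lifetime

ONE_DAY = 86400000
ONE_YEAR = 365 * ONE_DAY

print(effective_max_lifetime(3600000, ONE_DAY, ONE_YEAR))        # 86400000: raised to allowed_lifetime_min
print(effective_max_lifetime(10 * ONE_YEAR, ONE_DAY, ONE_YEAR))  # 31536000000: capped at allowed_lifetime_max
```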

Room configuration

To configure a room's message retention policy, a room's admin or moderator needs to send a state event in that room with the type m.room.retention and the following content:

{
    "max_lifetime": ...
}

In this event's content, the max_lifetime parameter has the same meaning as previously described, and needs to be expressed in milliseconds. The event's content can also include a min_lifetime parameter, which has the same meaning and limited support as previously described.

Note that, of all the servers in the room, only the ones with support for message retention policies will actually remove expired events. This support is currently not enabled by default in Synapse.
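For illustration, setting a one-day policy (1 day = 86400000 ms) could be done through the client-server API's state endpoint; the room ID and access token here are placeholders:

```
PUT /_matrix/client/r0/rooms/!myroom:example.com/state/m.room.retention
Authorization: Bearer <access_token>

{
    "max_lifetime": 86400000
}
```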

+

Note on reclaiming disk space

While purge jobs actually delete data from the database, the disk space used by the database might not decrease immediately on the database's host. However, even though the database engine won't free up the disk space, it will start writing new data into where the purged data was.

If you want to reclaim the freed disk space anyway and return it to the operating system, the server admin needs to run VACUUM FULL; (or VACUUM; for SQLite databases) on Synapse's database (see the related PostgreSQL documentation).

Handling spam in Synapse

Synapse has support to customize spam checking behavior. It can plug into a variety of events and affect how they are presented to users on your homeserver.

The spam checking behavior is implemented as a Python class, which must be able to be imported by the running Synapse.

Python spam checker class

The Python class is instantiated with two objects:

• Any configuration (see below).
• An instance of synapse.module_api.ModuleApi.

It then implements methods which return a boolean to alter behavior in Synapse. All the methods must be defined.

There's a generic method for checking every event (check_event_for_spam), as well as some specific methods:

• user_may_invite
• user_may_create_room
• user_may_create_room_alias
• user_may_publish_room
• check_username_for_spam
• check_registration_for_spam
• check_media_file_for_spam

The details of each of these methods (as well as their inputs and outputs) are documented in the synapse.events.spamcheck.SpamChecker class.

The ModuleApi class provides a way for the custom spam checker class to call back into the homeserver internals.

Additionally, a parse_config method is mandatory and receives the plugin config dictionary. After parsing, it must return an object which will be passed to __init__ later.

Example

from synapse.spam_checker_api import RegistrationBehaviour

class ExampleSpamChecker:
    def __init__(self, config, api):
        self.config = config
        self.api = api

    @staticmethod
    def parse_config(config):
        return config

    async def check_event_for_spam(self, foo):
        return False  # allow all events

    async def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        return True  # allow all invites

    async def user_may_create_room(self, userid):
        return True  # allow all room creations

    async def user_may_create_room_alias(self, userid, room_alias):
        return True  # allow all room aliases

    async def user_may_publish_room(self, userid, room_id):
        return True  # allow publishing of all rooms

    async def check_username_for_spam(self, user_profile):
        return False  # allow all usernames

    async def check_registration_for_spam(
        self,
        email_threepid,
        username,
        request_info,
        auth_provider_id,
    ):
        return RegistrationBehaviour.ALLOW  # allow all registrations

    async def check_media_file_for_spam(self, file_wrapper, file_info):
        return False  # allow all media

Configuration

Modify the spam_checker section of your homeserver.yaml in the following manner:

Create a list entry with the keys module and config.

• module should point to the fully qualified Python class that implements your custom logic, e.g. my_module.ExampleSpamChecker.
• config is a dictionary that gets passed to the spam checker class.

Example

This section might look like:

spam_checker:
  - module: my_module.ExampleSpamChecker
    config:
      # Enable or disable a specific option in ExampleSpamChecker.
      my_custom_option: true

More spam checkers can be added in tandem by appending more items to the list. An action is blocked when at least one of the configured spam checkers flags it.
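A minimal sketch of that combination rule (not Synapse's actual code; the checker classes here are hypothetical stand-ins):

```python
import asyncio

async def event_is_spam(checkers, event) -> bool:
    """An action is blocked if at least one configured checker flags it."""
    for checker in checkers:
        if await checker.check_event_for_spam(event):
            return True
    return False

class AllowAll:
    async def check_event_for_spam(self, event):
        return False

class BlockKeyword:
    async def check_event_for_spam(self, event):
        return "buy now" in event.get("body", "")

checkers = [AllowAll(), BlockKeyword()]
print(asyncio.run(event_is_spam(checkers, {"body": "hello"})))     # False
print(asyncio.run(event_is_spam(checkers, {"body": "buy now!!"}))) # True
```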

Examples

The Mjolnir project is a full-fledged example using the Synapse spam checking API, including a bot for dynamic configuration.

Presence Router Module

Synapse supports configuring a module that can specify additional users (local or remote) who should receive certain presence updates from local users.

Note that routing presence via Application Service transactions is not currently supported.

The presence routing module is implemented as a Python class, which will be imported by the running Synapse.

Python Presence Router Class

The Python class is instantiated with two objects:

• A configuration object of some type (see below).
• An instance of synapse.module_api.ModuleApi.

It then implements methods related to presence routing.


Note that one method of ModuleApi that may be useful is:

async def ModuleApi.send_local_online_presence_to(users: Iterable[str]) -> None

which can be given a list of local or remote MXIDs to broadcast known, online user presence to (for those users that the receiving user is considered interested in). It does not include state for users who are currently offline, and it can only be called on workers that support sending federation. Additionally, this method must only be called from the process that has been configured to write to the presence stream. By default, this is the main process, but another worker can be configured to do so.

Module structure

Below is a list of possible methods that can be implemented, and whether they are required.

parse_config

def parse_config(config_dict: dict) -> Any

Required. A static method that is passed a dictionary of config options, and should return a validated config object. This method is described further in Configuration.

get_users_for_states

async def get_users_for_states(
    self,
    state_updates: Iterable[UserPresenceState],
) -> Dict[str, Set[UserPresenceState]]:

Required. An asynchronous method that is passed an iterable of user presence state. This method can determine whether a given presence update should be sent to certain users. It does this by returning a dictionary with keys representing local or remote Matrix User IDs, and values being a python set of synapse.handlers.presence.UserPresenceState instances.

Synapse will then attempt to send the specified presence updates to each user when possible.

get_interested_users

async def get_interested_users(self, user_id: str) -> Union[Set[str], str]

Required. An asynchronous method that is passed a single Matrix User ID. This method is expected to return the users that the passed in user may be interested in the presence of. Returned users may be local or remote. The presence routed as a result of what this method returns is sent in addition to the updates already sent between users that share a room together. Presence updates are deduplicated.

This method should return a python set of Matrix User IDs, or the object synapse.events.presence_router.PresenceRouter.ALL_USERS to indicate that the passed user should receive presence information for all known users.

For clarity, if the user @alice:example.org is passed to this method, and the Set {"@bob:example.com", "@charlie:somewhere.org"} is returned, this signifies that Alice should receive presence updates sent by Bob and Charlie, regardless of whether these users share a room.

Example


Below is an example implementation of a presence router class.

from typing import Dict, Iterable, List, Set, Union

from synapse.events.presence_router import PresenceRouter
from synapse.handlers.presence import UserPresenceState
from synapse.module_api import ModuleApi

class PresenceRouterConfig:
    def __init__(self):
        # Config options with their defaults
        # A list of users to always send all user presence updates to
        self.always_send_to_users = []  # type: List[str]

        # A list of users to ignore presence updates for. Does not affect
        # shared-room presence relationships
        self.blacklisted_users = []  # type: List[str]

class ExamplePresenceRouter:
    """An example implementation of synapse.presence_router.PresenceRouter.
    Supports routing all presence to a configured set of users, or a subset
    of presence from certain users to members of certain rooms.

    Args:
        config: A configuration object.
        module_api: An instance of Synapse's ModuleApi.
    """
    def __init__(self, config: PresenceRouterConfig, module_api: ModuleApi):
        self._config = config
        self._module_api = module_api

    @staticmethod
    def parse_config(config_dict: dict) -> PresenceRouterConfig:
        """Parse a configuration dictionary from the homeserver config, do
        some validation and return a typed PresenceRouterConfig.

        Args:
            config_dict: The configuration dictionary.

        Returns:
            A validated config object.
        """
        # Initialise a typed config object
        config = PresenceRouterConfig()
        always_send_to_users = config_dict.get("always_send_to_users")
        blacklisted_users = config_dict.get("blacklisted_users")

        # Do some validation of config options... otherwise raise a
        # synapse.config.ConfigError.
        config.always_send_to_users = always_send_to_users
        config.blacklisted_users = blacklisted_users

        return config

    async def get_users_for_states(
        self,
        state_updates: Iterable[UserPresenceState],
    ) -> Dict[str, Set[UserPresenceState]]:
        """Given an iterable of user presence updates, determine where each one
        needs to go. Returned results will not affect presence updates that are
        sent between users who share a room.

        Args:
            state_updates: An iterable of user presence state updates.

        Returns:
            A dictionary of user_id -> set of UserPresenceState that the user
            should receive.
        """
        destination_users = {}  # type: Dict[str, Set[UserPresenceState]]

        # Ignore any updates for blacklisted users
        desired_updates = set()
        for update in state_updates:
            if update.state_key not in self._config.blacklisted_users:
                desired_updates.add(update)

        # Send all presence updates to specific users
        for user_id in self._config.always_send_to_users:
            destination_users[user_id] = desired_updates

        return destination_users

    async def get_interested_users(
        self,
        user_id: str,
    ) -> Union[Set[str], PresenceRouter.ALL_USERS]:
        """
        Retrieve a list of users that `user_id` is interested in receiving the
        presence of. This will be in addition to those they share a room with.
        Optionally, the object PresenceRouter.ALL_USERS can be returned to indicate
        that this user should receive all incoming local and remote presence updates.

        Note that this method will only be called for local users.

        Args:
            user_id: A user requesting presence updates.

        Returns:
            A set of user IDs to return additional presence updates for, or
            PresenceRouter.ALL_USERS to return presence updates for all other users.
        """
        if user_id in self._config.always_send_to_users:
            return PresenceRouter.ALL_USERS

        return set()

A note on get_users_for_states and get_interested_users

Both of these methods are effectively two different sides of the same coin. The logic regarding which users should receive updates for other users should be the same between them.

get_users_for_states is called when presence updates come in from either federation or local users, and is used to either direct local presence to remote users, or to wake up the sync streams of local users to collect remote presence.

In contrast, get_interested_users is used to determine the users that presence should be fetched for when a local user is syncing. This presence is then retrieved, before being fed through get_users_for_states once again, with only the syncing user's routing information pulled from the resulting dictionary.

Their routing logic should thus line up, else you may run into unintended behaviour.

Configuration

Once you've crafted your module and installed it into the same Python environment as Synapse, amend your homeserver config file with the following.

presence:
  routing_module:
    module: my_module.ExamplePresenceRouter
    config:
      # Any configuration options for your module. The below is an example
      # of setting options for ExamplePresenceRouter.
      always_send_to_users: ["@presence_gobbler:example.org"]
      blacklisted_users:
        - "@alice:example.com"
        - "@bob:example.com"
      ...

The contents of config will be passed as a Python dictionary to the static parse_config method of your class. The object returned by this method will then be passed to the __init__ method of your module as config.

Scaling synapse via workers

For small instances it is recommended to run Synapse in the default monolith mode. For larger instances where performance is a concern it can be helpful to split out functionality into multiple separate python processes. These processes are called 'workers', and are (eventually) intended to scale horizontally independently.

Synapse's worker support is under active development and subject to change as we attempt to rapidly scale ever larger Synapse instances. However we are documenting it here to help admins needing a highly scalable Synapse instance similar to the one running matrix.org.

All processes continue to share the same database instance, and as such, workers only work with PostgreSQL-based Synapse deployments. SQLite should only be used for demo purposes and any admin considering workers should already be running PostgreSQL.

See also https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability for a higher level overview.

Main process/worker communication

The processes communicate with each other via a Synapse-specific protocol called 'replication' (analogous to MySQL- or Postgres-style database replication) which feeds streams of newly written data between processes so they can be kept in sync with the database state.

When configured to do so, Synapse uses a Redis pub/sub channel to send the replication stream between all configured Synapse processes. Additionally, processes may make HTTP requests to each other, primarily for operations which need to wait for a reply, such as sending an event.

Redis support was added in v1.13.0 with it becoming the recommended method in v1.18.0. It replaced the old direct TCP connections (which are deprecated as of v1.18.0) to the main process. With Redis, rather than all the workers connecting to the main process, all the workers and the main process connect to Redis, which relays replication commands between processes. This can give a significant CPU saving on the main process and will be a prerequisite for upcoming performance improvements.

If Redis support is enabled Synapse will use it as a shared cache, as well as a pub/sub mechanism.

See the Architectural diagram section at the end for a visualisation of what this looks like.

Setting up workers

A Redis server is required to manage the communication between the processes. The Redis server should be installed following the normal procedure for your distribution (e.g. apt install redis-server on Debian). It is safe to use an existing Redis deployment if you have one.

Once installed, check that Redis is running and accessible from the host running Synapse, for example by executing echo PING | nc -q1 localhost 6379 and seeing a response of +PONG.
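Incidentally, the +PONG above is a RESP (REdis Serialization Protocol) "simple string" reply; a tiny decoder shows what nc is printing:

```python
def decode_simple_string(reply: bytes) -> str:
    """RESP simple strings have the form b'+<payload>\r\n'."""
    assert reply.startswith(b"+") and reply.endswith(b"\r\n")
    return reply[1:-2].decode()

print(decode_simple_string(b"+PONG\r\n"))  # PONG
```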

The appropriate dependencies must also be installed for Synapse. If using a virtualenv, these can be installed with:

pip install "matrix-synapse[redis]"

Note that these dependencies are included when synapse is installed with pip install matrix-synapse[all]. They are also included in the debian packages from matrix.org and in the docker images at https://hub.docker.com/r/matrixdotorg/synapse/.

To make effective use of the workers, you will need to configure an HTTP reverse-proxy such as nginx or haproxy, which will direct incoming requests to the correct worker, or to the main synapse instance. See reverse_proxy.md for information on setting up a reverse proxy.

When using workers, each worker process has its own configuration file which contains settings specific to that worker, such as the HTTP listener that it provides (if any), logging configuration, etc.

Normally, the worker processes are configured to read from a shared configuration file as well as the worker-specific configuration files. This makes it easier to keep common configuration settings synchronised across all the processes.

The main process is somewhat special in this respect: it does not normally need its own configuration file and can take all of its configuration from the shared configuration file.

Shared configuration

Normally, only a couple of changes are needed to make an existing configuration file suitable for use with workers. First, you need to enable an "HTTP replication listener" for the main process; and secondly, you need to enable redis-based replication. Optionally, a shared secret can be used to authenticate HTTP traffic between workers. For example:

# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
listeners:
  # The HTTP replication port
  - port: 9093
    bind_address: '127.0.0.1'
    type: http
    resources:
     - names: [replication]

# Add a random shared secret to authenticate traffic.
worker_replication_secret: ""

redis:
    enabled: true

See the sample config for the full documentation of each option.

Under no circumstances should the replication listener be exposed to the public internet; it has no authentication and is unencrypted.

Worker configuration

In the config file for each worker, you must specify the type of worker application (worker_app), and you should specify a unique name for the worker (worker_name). The currently available worker applications are listed below. You must also specify the HTTP replication endpoint that it should talk to on the main synapse process: worker_replication_host should specify the host of the main synapse, and worker_replication_http_port should point to the HTTP replication port. If the worker will handle HTTP requests then the worker_listeners option should be set with a http listener, in the same way as the listeners option in the shared config.

For example:

worker_app: synapse.app.generic_worker
worker_name: worker1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
 - type: http
   port: 8083
   resources:
     - names:
       - client
       - federation

worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml

...is a full configuration for a generic worker instance, which will expose a plain HTTP endpoint on port 8083, separately serving various endpoints, e.g. /sync, which are listed below.

Obviously you should configure your reverse-proxy to route the relevant endpoints to the worker (localhost:8083 in the above example).

Running Synapse with workers

Finally, you need to start your worker processes. This can be done with either synctl or your distribution's preferred service manager such as systemd. We recommend the use of systemd where available: for information on setting up systemd to start synapse workers, see systemd-with-workers. To use synctl, see synctl_workers.md.

Available worker applications

synapse.app.generic_worker

This worker can handle API requests matching the following regular expressions:
# Sync requests
+^/_matrix/client/(v2_alpha|r0)/sync$
+^/_matrix/client/(api/v1|v2_alpha|r0)/events$
+^/_matrix/client/(api/v1|r0)/initialSync$
+^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
+
+# Federation requests
+^/_matrix/federation/v1/event/
+^/_matrix/federation/v1/state/
+^/_matrix/federation/v1/state_ids/
+^/_matrix/federation/v1/backfill/
+^/_matrix/federation/v1/get_missing_events/
+^/_matrix/federation/v1/publicRooms
+^/_matrix/federation/v1/query/
+^/_matrix/federation/v1/make_join/
+^/_matrix/federation/v1/make_leave/
+^/_matrix/federation/v1/send_join/
+^/_matrix/federation/v2/send_join/
+^/_matrix/federation/v1/send_leave/
+^/_matrix/federation/v2/send_leave/
+^/_matrix/federation/v1/invite/
+^/_matrix/federation/v2/invite/
+^/_matrix/federation/v1/query_auth/
+^/_matrix/federation/v1/event_auth/
+^/_matrix/federation/v1/exchange_third_party_invite/
+^/_matrix/federation/v1/user/devices/
+^/_matrix/federation/v1/get_groups_publicised$
+^/_matrix/key/v2/query
+
+# Inbound federation transaction request
+^/_matrix/federation/v1/send/
+
+# Client API requests
+^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
+^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
+^/_matrix/client/(api/v1|r0|unstable)/devices$
+^/_matrix/client/(api/v1|r0|unstable)/keys/query$
+^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
+^/_matrix/client/versions$
+^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
+^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
+^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
+^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
+^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
+^/_matrix/client/(api/v1|r0|unstable)/search$
+
+# Registration/login requests
+^/_matrix/client/(api/v1|r0|unstable)/login$
+^/_matrix/client/(r0|unstable)/register$
+
+# Event sending requests
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
+^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
+^/_matrix/client/(api/v1|r0|unstable)/join/
+^/_matrix/client/(api/v1|r0|unstable)/profile/
+
+
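As an illustrative sketch (the regex list above is the source of truth; the helper name here is invented), a routing layer matches request paths against these patterns with ordinary anchored regex matching:

```python
import re

# A small subset of the generic_worker patterns listed above.
WORKER_PATTERNS = [re.compile(p) for p in (
    r"^/_matrix/client/(v2_alpha|r0)/sync$",
    r"^/_matrix/federation/v1/send/",
    r"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
)]

def routes_to_worker(path: str) -> bool:
    """Return True if the request path matches a worker-handled pattern."""
    # re.match anchors at the start of the string; patterns without a
    # trailing $ are prefix matches, just as a reverse proxy treats them.
    return any(p.match(path) for p in WORKER_PATTERNS)
```

For example, `routes_to_worker("/_matrix/client/r0/sync")` is true, while a path outside this subset (such as `/login`) would fall through to the main process.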

Additionally, the following REST endpoints can be handled for GET requests:

^/_matrix/federation/v1/groups/

Pagination requests can also be handled, but all requests for a given room must be routed to the same instance. Additionally, care must be taken to ensure that the purge history admin API is not used while pagination requests for the room are in flight:

^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$

Additionally, the following endpoints should be included if Synapse is configured to use SSO (you only need to include the ones for whichever SSO provider you're using):

# for all SSO providers
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
^/_synapse/client/pick_idp$
^/_synapse/client/pick_username
^/_synapse/client/new_user_consent$
^/_synapse/client/sso_register$

# OpenID Connect requests.
^/_synapse/client/oidc/callback$

# SAML requests.
^/_synapse/client/saml2/authn_response$

# CAS requests.
^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$

Ensure that all SSO logins go to a single process. If the SSO endpoints are handled by multiple workers, logins may fail: see #7530 and #9427.

Note that an HTTP listener with client and federation resources must be configured in the worker_listeners option in the worker config.

Load balancing

It is possible to run multiple instances of this worker app, with incoming requests being load-balanced between them by the reverse-proxy. However, different endpoints have different characteristics and so admins may wish to run multiple groups of workers handling different endpoints so that load balancing can be done in different ways.

For /sync and /initialSync requests it will be more efficient if all requests from a particular user are routed to a single instance. Extracting a user ID from the access token or Authorization header is currently left as an exercise for the reader. Admins may additionally wish to separate out /sync requests that have a since query parameter from those that don't (and /initialSync): requests without since are "initial syncs", which happen when a user logs in on a new device and can be very resource-intensive, so isolating them stops them from interfering with other users' ongoing syncs.
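A reverse proxy would normally derive the routing key itself (e.g. via an nginx map); as a hedged sketch, with invented worker names, the idea of pinning a user's /sync traffic to one backend looks like this:

```python
import hashlib

# Hypothetical pool of sync workers (names are illustrative only).
SYNC_WORKERS = ["synapse-sync-1", "synapse-sync-2", "synapse-sync-3"]

def pick_sync_worker(authorization_header: str) -> str:
    """Hash the bearer token so a given user's /sync requests always
    land on the same worker."""
    token = authorization_header.removeprefix("Bearer ").strip()
    digest = hashlib.sha256(token.encode()).digest()
    return SYNC_WORKERS[int.from_bytes(digest[:4], "big") % len(SYNC_WORKERS)]
```

The same token always maps to the same backend, which is all that is required for this load-balancing strategy; hashing the whole Authorization header works just as well.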

Federation and client requests can be balanced via simple round robin.

The inbound federation transaction request ^/_matrix/federation/v1/send/ should be balanced by source IP so that transactions from the same remote server go to the same process.

Registration/login requests can be handled separately, purely to help ensure that unexpected load doesn't affect new logins and sign-ups.

Finally, event sending requests can be balanced by the room ID in the URI (or the full URI, or even just round robin); the room ID is the path component after /rooms/. If a large bridge is connected that is sending or may send lots of events, then a dedicated set of workers can be provisioned to limit the effects of bursts of events from that bridge on events sent by normal users.
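A minimal sketch of that routing rule (worker names are invented; a real deployment would express this in reverse-proxy config): extract the path component after /rooms/ and hash it to pick a worker group.

```python
import hashlib
from urllib.parse import unquote

# Hypothetical event-sending worker group.
EVENT_WORKERS = ["event_worker1", "event_worker2"]

def pick_event_worker(path: str) -> str:
    """Route all event sends for a room to the same worker by hashing
    the room ID, i.e. the path component after /rooms/."""
    room_id = unquote(path.split("/rooms/", 1)[1].split("/", 1)[0])
    digest = hashlib.sha256(room_id.encode()).digest()
    return EVENT_WORKERS[int.from_bytes(digest[:4], "big") % len(EVENT_WORKERS)]

path = "/_matrix/client/r0/rooms/!ERAgBpSOcCCuTJqQPk:matrix.org/send/m.room.message/txn1"
```

Every request for the same room ID maps to the same entry in EVENT_WORKERS, which keeps a bursty room (such as a busy bridge) confined to one worker group.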

Stream writers

Additionally, there is experimental support for moving the writing of specific streams (such as events) off the main process to a particular worker. (This is only supported with Redis-based replication.)

Currently supported streams are events and typing.

To enable this, the worker must have an HTTP replication listener configured, have a worker_name, and be listed in the instance_map config. For example, to move event persistence off to a dedicated worker, the shared configuration would include:

instance_map:
    event_persister1:
        host: localhost
        port: 8034

stream_writers:
    events: event_persister1

The events stream also experimentally supports having multiple writers, where work is sharded between them by room ID. Note that you must restart all worker instances when adding or removing event persisters. An example stream_writers configuration with multiple writers:

stream_writers:
    events:
        - event_persister1
        - event_persister2

Background tasks

There is also experimental support for moving background tasks to a separate worker. Background tasks are run periodically or started via replication. Exactly which tasks are configured to run depends on your Synapse configuration (e.g. if stats is enabled).

To enable this, the worker must have a worker_name and can be configured to run background tasks. For example, to move background tasks to a dedicated worker, the shared configuration would include:

run_background_tasks_on: background_worker

You might also wish to investigate the update_user_directory and media_instance_running_background_jobs settings.

synapse.app.pusher

Handles sending push notifications to sygnal and email. Doesn't handle any REST endpoints itself, but you should set start_pushers: False in the shared configuration file to stop the main synapse sending push notifications.

To run multiple instances at once, the pusher_instances option should list all pusher instances by their worker name, e.g.:

pusher_instances:
    - pusher_worker1
    - pusher_worker2

synapse.app.appservice

Handles sending output traffic to Application Services. Doesn't handle any REST endpoints itself, but you should set notify_appservices: False in the shared configuration file to stop the main synapse sending appservice notifications.

Note this worker cannot be load-balanced: only one instance should be active.

synapse.app.federation_sender

Handles sending federation traffic to other servers. Doesn't handle any REST endpoints itself, but you should set send_federation: False in the shared configuration file to stop the main synapse sending this traffic.

If running multiple federation senders, then you must list each instance in the federation_sender_instances option by their worker_name. All instances must be stopped and started when adding or removing instances. For example:

federation_sender_instances:
    - federation_sender1
    - federation_sender2

synapse.app.media_repository

Handles the media repository. It can handle all endpoints starting with:

/_matrix/media/

... and the following regular expressions matching media-specific administration APIs:

^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media.*$
^/_synapse/admin/v1/user/.*/media.*$
^/_synapse/admin/v1/media/.*$
^/_synapse/admin/v1/quarantine_media/.*$

You should also set enable_media_repo: False in the shared configuration file to stop the main synapse running background jobs related to managing the media repository.

In the media_repository worker configuration file, configure the HTTP listener to expose the media resource. For example:

    worker_listeners:
     - type: http
       port: 8085
       resources:
         - names:
           - media

Note that if running multiple media repositories they must be on the same server and you must configure a single instance to run the background tasks, e.g.:

    media_instance_running_background_jobs: "media-repository-1"

Note that if a reverse proxy is used, then /_matrix/media/ must be routed for both inbound client and federation requests (if they are handled separately).

synapse.app.user_dir

Handles searches in the user directory. It can handle REST endpoints matching the following regular expressions:

^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$

When using this worker you must also set update_user_directory: False in the shared configuration file to stop the main synapse running background jobs related to updating the user directory.

synapse.app.frontend_proxy

Proxies some frequently-requested client endpoints to add caching and remove load from the main synapse. It can handle REST endpoints matching the following regular expressions:

^/_matrix/client/(api/v1|r0|unstable)/keys/upload

If use_presence is False in the homeserver config, it can also handle REST endpoints matching the following regular expressions:

^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status

This "stub" presence handler will pass through GET requests but make the PUT effectively a no-op.

It will proxy any requests it cannot handle to the main synapse instance. It must therefore be configured with the location of the main instance, via the worker_main_http_uri setting in the frontend_proxy worker configuration file. For example:

worker_main_http_uri: http://127.0.0.1:8008

Historical apps

Note: Historically there used to be more apps, however they have been amalgamated into a single synapse.app.generic_worker app. The remaining apps are ones that do specific processing unrelated to requests, e.g. the pusher that handles sending out push notifications for new events. The intention is for all these to be folded into the generic_worker app and to use config to define which processes handle the various processing, such as push notifications.

Migration from old config

There are two main independent changes that have been made: introducing Redis support and merging apps into synapse.app.generic_worker. Both these changes are backwards compatible and so no changes to the config are required, however server admins are encouraged to plan to migrate to Redis as the old style direct TCP replication config is deprecated.

To migrate to Redis, add the redis config as above, and optionally remove the TCP replication listener from the master and worker_replication_port from worker config.

To migrate apps to use synapse.app.generic_worker, simply update the worker_app option in the worker configs, and where workers are started (e.g. in systemd service files, but not required for synctl).

Architectural diagram

The following shows an example setup using Redis and a reverse proxy:

                     Clients & Federation
                              |
                              v
                        +-----------+
                        |           |
                        |  Reverse  |
                        |  Proxy    |
                        |           |
                        +-----------+
                            | | |
                            | | | HTTP requests
        +-------------------+ | +-----------+
        |                 +---+             |
        |                 |                 |
        v                 v                 v
+--------------+  +--------------+  +--------------+  +--------------+
|   Main       |  |   Generic    |  |   Generic    |  |  Event       |
|   Process    |  |   Worker 1   |  |   Worker 2   |  |  Persister   |
+--------------+  +--------------+  +--------------+  +--------------+
      ^    ^          |   ^   |         |   ^   |          ^    ^
      |    |          |   |   |         |   |   |          |    |
      |    |          |   |   |  HTTP   |   |   |          |    |
      |    +----------+<--|---|---------+   |   |          |    |
      |                   |   +-------------|-->+----------+    |
      |                   |                 |                   |
      |                   |                 |                   |
      v                   v                 v                   v
====================================================================
                                                         Redis pub/sub channel

Using synctl with workers

If you want to use synctl to manage your synapse processes, you will need to create an additional configuration file for the main synapse process. That configuration should look like this:

worker_app: synapse.app.homeserver

Additionally, each worker app must be configured with the name of a "pid file", to which it will write its process ID when it starts. For example, for a synchrotron, you might write:

worker_pid_file: /home/matrix/synapse/worker1.pid

Finally, to actually run your worker-based synapse, you must pass synctl the -a commandline option to tell it to operate on all the worker configurations found in the given directory, e.g.:

synctl -a $CONFIG/workers start

Currently one should always restart all workers when restarting or upgrading synapse, unless you explicitly know it's safe not to. For instance, restarting synapse without restarting all the synchrotrons may result in broken typing notifications.

To manipulate a specific worker, you pass the -w option to synctl:

synctl -w $CONFIG/workers/worker1.yaml restart

Setting up Synapse with Workers and Systemd

This is a setup for managing synapse with systemd, including support for managing workers. It provides a matrix-synapse service for the master, as well as a matrix-synapse-worker@ service template for any workers you require. Additionally, to group the required services, it sets up a matrix-synapse.target.

See the folder system for the systemd unit files.

The folder workers contains an example configuration for the federation_reader worker.

Synapse configuration files

See workers.md for information on how to set up the configuration files and reverse-proxy correctly. You can find an example worker config in the workers folder.

Systemd manages daemonization itself, so ensure that none of the configuration files set either daemonize or worker_daemonize.

The config files of all workers are expected to be located in /etc/matrix-synapse/workers. If you want to use a different location, edit the provided *.service files accordingly.

There is no need for a separate configuration file for the master process.

Set up

  1. Adjust synapse configuration files as above.
  2. Copy the *.service and *.target files in system to /etc/systemd/system.
  3. Run systemctl daemon-reload to tell systemd to load the new unit files.
  4. Run systemctl enable matrix-synapse.service. This will configure the synapse master process to be started as part of the matrix-synapse.target target.
  5. For each worker process to be enabled, run systemctl enable matrix-synapse-worker@<worker_name>.service. For each <worker_name>, there should be a corresponding configuration file /etc/matrix-synapse/workers/<worker_name>.yaml.
  6. Start all the synapse processes with systemctl start matrix-synapse.target.
  7. Tell systemd to start synapse on boot with systemctl enable matrix-synapse.target.

Usage

Once the services are correctly set up, you can use the following commands to manage your synapse installation:

# Restart Synapse master and all workers
systemctl restart matrix-synapse.target

# Stop Synapse and all workers
systemctl stop matrix-synapse.target

# Restart the master alone
systemctl restart matrix-synapse.service

# Restart a specific worker (eg. federation_reader); the master is
# unaffected by this.
systemctl restart matrix-synapse-worker@federation_reader.service

# Add a new worker (assuming all configs are set up already)
systemctl enable matrix-synapse-worker@federation_writer.service
systemctl restart matrix-synapse.target

Hardening

Optional: If further hardening is desired, the file override-hardened.conf may be copied from contrib/systemd/override-hardened.conf in this repository to the location /etc/systemd/system/matrix-synapse.service.d/override-hardened.conf (the directory may have to be created). It enables certain sandboxing features in systemd to further secure the synapse service. You may read the comments to understand what the override file is doing. The same file will need to be copied to /etc/systemd/system/matrix-synapse-worker@.service.d/override-hardened-worker.conf (this directory may also have to be created) in order to apply the same hardening options to any worker processes.

Once these files have been copied to their appropriate locations, simply reload systemd's manager config files and restart all Synapse services to apply the hardening options. They will automatically be applied at every restart as long as the override files are present at the specified locations.

systemctl daemon-reload

# Restart services
systemctl restart matrix-synapse.target

In order to see their effect, you may run systemd-analyze security matrix-synapse.service before and after applying the hardening options to see the changes being applied at a glance.

Administration

This section contains information on managing your Synapse homeserver. This includes:

  • Managing users, rooms and media via the Admin API.
  • Setting up metrics and monitoring to give you insight into your homeserver's health.
  • Configuring structured logging.

The Admin API

Authenticate as a server admin

Many of the API calls in the Admin API will require an access_token for a server admin. (Note that a server admin is distinct from a room admin.)

A user can be marked as a server admin by updating the database directly, e.g.:

UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
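As a hedged sketch of what that statement does (using an in-memory SQLite database with a hypothetical minimal stand-in for Synapse's users table, not the real schema):

```python
import sqlite3

# Minimal illustrative stand-in for the users table; Synapse's real
# schema has many more columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY, admin INTEGER DEFAULT 0)")
conn.execute("INSERT INTO users (name) VALUES ('@foo:bar.com')")

# The statement from the docs above:
conn.execute("UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'")

admin, = conn.execute(
    "SELECT admin FROM users WHERE name = '@foo:bar.com'").fetchone()
```

After the UPDATE, the admin flag for that user is 1; on a PostgreSQL-backed homeserver you would run the same statement via psql instead.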

A new server admin user can also be created using the register_new_matrix_user command. This is a script that is located in the scripts/ directory, or possibly already on your $PATH depending on how Synapse was installed.

Finding your user's access_token is client-dependent, but will usually be shown in the client's settings.

Making an Admin API request

Once you have your access_token, you will need to authenticate each request to an Admin API endpoint by providing the token as either a query parameter or a request header. To add it as a request header in cURL:

curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>
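The same request can be built with Python's standard library; this sketch only constructs the authenticated request object (the token and URL are placeholders) rather than sending it:

```python
import urllib.request

# Placeholder values; substitute your real admin token and homeserver URL.
access_token = "syt_example_token"
url = "http://localhost:8008/_synapse/admin/v1/event_reports?from=0&limit=10"

req = urllib.request.Request(
    url, headers={"Authorization": f"Bearer {access_token}"})

# Sending it would be: urllib.request.urlopen(req)
```

Passing the token in the Authorization header is preferred over the access_token query parameter, since query strings are more likely to end up in logs.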

For more details on access tokens in Matrix, please refer to the complete matrix spec documentation.

Account validity API

This API allows a server administrator to manage the validity of an account. To use it, you must enable the account validity feature (under account_validity) in Synapse's configuration.

Renew account

This API extends the validity of an account by as much time as configured in the period parameter from the account_validity configuration.

The API is:

POST /_synapse/admin/v1/account_validity/validity

with the following body:

{
    "user_id": "<user ID for the account to renew>",
    "expiration_ts": 0,
    "enable_renewal_emails": true
}

expiration_ts is an optional parameter and overrides the expiration date, which otherwise defaults to now + validity period.

enable_renewal_emails is also an optional parameter and enables/disables sending renewal emails to the user. Defaults to true.

The API returns with the new expiration date for this account, as a timestamp in milliseconds since epoch:

{
    "expiration_ts": 0
}

Delete a local group

This API lets a server admin delete a local group. Doing so will kick all users out of the group so that their clients will correctly handle the group being deleted.

The API is:

POST /_synapse/admin/v1/delete_group/<group_id>

To use it, you will need to authenticate by providing an access_token for a server admin: see Admin API.

Show reported events

This API returns information about reported events.

The API is:

GET /_synapse/admin/v1/event_reports?from=0&limit=10

To use it, you will need to authenticate by providing an access_token for a server admin: see Admin API.

It returns a JSON body like the following:

{
    "event_reports": [
        {
            "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
            "id": 2,
            "reason": "foo",
            "score": -100,
            "received_ts": 1570897107409,
            "canonical_alias": "#alias1:matrix.org",
            "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
            "name": "Matrix HQ",
            "sender": "@foobar:matrix.org",
            "user_id": "@foo:matrix.org"
        },
        {
            "event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
            "id": 3,
            "reason": "bar",
            "score": -100,
            "received_ts": 1598889612059,
            "canonical_alias": "#alias2:matrix.org",
            "room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
            "name": "Your room name here",
            "sender": "@foobar:matrix.org",
            "user_id": "@bar:matrix.org"
        }
    ],
    "next_token": 2,
    "total": 4
}

To paginate, check for next_token and if present, call the endpoint again with from set to the value of next_token. This will return a new page.
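That pagination pattern can be sketched as a loop; here the HTTP call is replaced by a stubbed-out fetch function returning canned pages, so only the next_token handling is shown:

```python
# Stubbed pages, keyed by the "from" offset, mimicking the response
# shape above; the last page has no next_token.
PAGES = {
    0: {"event_reports": [{"id": 1}, {"id": 2}], "next_token": 2, "total": 3},
    2: {"event_reports": [{"id": 3}], "total": 3},
}

def fetch_event_reports(from_=0):
    # In a real deployment this would be an authenticated GET to
    # /_synapse/admin/v1/event_reports?from=<from_>.
    return PAGES[from_]

reports = []
from_ = 0
while True:
    page = fetch_event_reports(from_)
    reports.extend(page["event_reports"])
    if "next_token" not in page:
        break  # no next_token means there are no more pages
    from_ = page["next_token"]
```

The loop terminates exactly when a page omits next_token, matching the rule described above.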

If the endpoint does not return a next_token then there are no more reports to paginate through.

URL parameters:

  • limit: integer - Is optional but is used for pagination, denoting the maximum number of items to return in this call. Defaults to 100.
  • from: integer - Is optional but used for pagination, denoting the offset in the returned results. This should be treated as an opaque value and not explicitly set to anything other than the return value of next_token from a previous call. Defaults to 0.
  • dir: string - Direction of event report order. Whether to fetch the most recent first (b) or the oldest first (f). Defaults to b.
  • user_id: string - Is optional and filters to only return users with user IDs that contain this value. This is the user who reported the event and wrote the reason.
  • room_id: string - Is optional and filters to only return rooms with room IDs that contain this value.

Response

The following fields are returned in the JSON response body:

  • id: integer - ID of event report.
  • received_ts: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
  • room_id: string - The ID of the room in which the event being reported is located.
  • name: string - The name of the room.
  • event_id: string - The ID of the reported event.
  • user_id: string - This is the user who reported the event and wrote the reason.
  • reason: string - Comment made by the user_id in this report. May be blank or null.
  • score: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive". May be null.
  • sender: string - This is the ID of the user who sent the original message/event that was reported.
  • canonical_alias: string - The canonical alias of the room. null if the room does not have a canonical alias set.
  • next_token: integer - Indication for pagination. See above.
  • total: integer - Total number of event reports related to the query (user_id and room_id).

Show details of a specific event report


This API returns information about a specific event report.

The API is:

GET /_synapse/admin/v1/event_reports/<report_id>

To use it, you will need to authenticate by providing an access_token for a server admin: see Admin API.

It returns a JSON body like the following:

{
    "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
    "event_json": {
        "auth_events": [
            "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
            "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
        ],
        "content": {
            "body": "matrix.org: This Week in Matrix",
            "format": "org.matrix.custom.html",
            "formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
            "msgtype": "m.notice"
        },
        "depth": 546,
        "hashes": {
            "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
        },
        "origin": "matrix.org",
        "origin_server_ts": 1592291711430,
        "prev_events": [
            "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
        ],
        "prev_state": [],
        "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
        "sender": "@foobar:matrix.org",
        "signatures": {
            "matrix.org": {
                "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
            }
        },
        "type": "m.room.message",
        "unsigned": {
            "age_ts": 1592291711430
        }
    },
    "id": <report_id>,
    "reason": "foo",
    "score": -100,
    "received_ts": 1570897107409,
    "canonical_alias": "#alias1:matrix.org",
    "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
    "name": "Matrix HQ",
    "sender": "@foobar:matrix.org",
    "user_id": "@foo:matrix.org"
}

URL parameters:

  • report_id: string - The ID of the event report.

Response


The following fields are returned in the JSON response body:

  • id: integer - ID of event report.
  • received_ts: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
  • room_id: string - The ID of the room in which the event being reported is located.
  • name: string - The name of the room.
  • event_id: string - The ID of the reported event.
  • user_id: string - This is the user who reported the event and wrote the reason.
  • reason: string - Comment made by the user_id in this report. May be blank.
  • score: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
  • sender: string - This is the ID of the user who sent the original message/event that was reported.
  • canonical_alias: string - The canonical alias of the room. null if the room does not have a canonical alias set.
  • event_json: object - Details of the original event that was reported.


Querying media


These APIs allow extracting media information from the homeserver.

List all media in a room

This API gets a list of known media in a room. However, it only shows media from unencrypted events or rooms.

The API is:

GET /_synapse/admin/v1/room/<room_id>/media

To use it, you will need to authenticate by providing an access_token for a server admin: see Admin API.

The API returns a JSON body like the following:

{
  "local": [
    "mxc://localhost/xwvutsrqponmlkjihgfedcba",
    "mxc://localhost/abcdefghijklmnopqrstuvwx"
  ],
  "remote": [
    "mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
    "mxc://matrix.org/abcdefghijklmnopqrstuvwx"
  ]
}

List all media uploaded by a user

Listing all media that has been uploaded by a local user can be achieved through the use of the List media of a user Admin API.

Quarantine media

Quarantining media means that it is marked as inaccessible by users. It applies to any local media, and any locally-cached copies of remote media.

The media file itself (and any thumbnails) is not deleted from the server.

Quarantining media by ID

This API quarantines a single piece of local or remote media.

Request:

POST /_synapse/admin/v1/media/quarantine/<server_name>/<media_id>

{}

Where server_name is in the form of example.org, and media_id is in the form of abcdefg12345....

Response:

{}

Remove media from quarantine by ID


This API removes a single piece of local or remote media from quarantine.

Request:

POST /_synapse/admin/v1/media/unquarantine/<server_name>/<media_id>

{}

Where server_name is in the form of example.org, and media_id is in the form of abcdefg12345....

Response:

{}

Quarantining media in a room


This API quarantines all local and remote media in a room.

Request:

POST /_synapse/admin/v1/room/<room_id>/media/quarantine

{}

Where room_id is in the form of !roomid12345:example.org.

Response:

{
  "num_quarantined": 10
}

The following fields are returned in the JSON response body:

  • num_quarantined: integer - The number of media items successfully quarantined

Note that there is a legacy endpoint, POST /_synapse/admin/v1/quarantine_media/<room_id>, that operates the same way. However, it is deprecated and may be removed in a future release.

Quarantining all media of a user

This API quarantines all local media that a local user has uploaded. That is to say, if you would like to quarantine media uploaded by a user on a remote homeserver, you should instead use one of the other APIs.

Request:

POST /_synapse/admin/v1/user/<user_id>/media/quarantine

{}

URL Parameters

  • user_id: string - User ID in the form of @bob:example.org

Response:

{
  "num_quarantined": 10
}

The following fields are returned in the JSON response body:

  • num_quarantined: integer - The number of media items successfully quarantined

Protecting media from being quarantined

This API protects a single piece of local media from being quarantined using the above APIs. This is useful for sticker packs and other shared media which you do not want to get quarantined, especially when quarantining media in a room.

Request:

POST /_synapse/admin/v1/media/protect/<media_id>

{}

Where media_id is in the form of abcdefg12345....

Response:

{}

Unprotecting media from being quarantined

This API reverts the protection of a piece of media.

Request:

POST /_synapse/admin/v1/media/unprotect/<media_id>

{}

Where media_id is in the form of abcdefg12345....

Response:

{}

Delete local media

This API deletes the local media from the disk of your own server. This includes any local thumbnails and copies of media downloaded from remote homeservers. This API will not affect media that has been uploaded to external media repositories (e.g. https://github.com/turt2live/matrix-media-repo/). See also Purge Remote Media API.

Delete a specific local media

Delete a specific media_id.

Request:

DELETE /_synapse/admin/v1/media/<server_name>/<media_id>

{}

URL Parameters

  • server_name: string - The name of your local server (e.g. matrix.org)
  • media_id: string - The ID of the media (e.g. abcdefghijklmnopqrstuvwx)

Response:

{
  "deleted_media": [
    "abcdefghijklmnopqrstuvwx"
  ],
  "total": 1
}

The following fields are returned in the JSON response body:

  • deleted_media: an array of strings - List of deleted media_id
  • total: integer - Total number of deleted media_id

Delete local media by date or size

+

Request:

+
POST /_synapse/admin/v1/media/<server_name>/delete?before_ts=<before_ts>
+
+{}
+
+

URL Parameters

+
    +
  • server_name: string - The name of your local server (e.g matrix.org).
  • +
  • before_ts: string representing a positive integer - Unix timestamp in ms. +Files that were last used before this timestamp will be deleted. It is the timestamp of +last access, not the timestamp of creation.
  • +
  • size_gt: Optional - string representing a positive integer - Size of the media in bytes. +Files that are larger will be deleted. Defaults to 0.
  • +
  • keep_profiles: Optional - string representing a boolean - When true, files +that are still used in image data (e.g. user profiles, room avatars) are kept. +If false, these files will also be deleted. Defaults to true.
  • +
+

Response:

+
{
+  "deleted_media": [
+    "abcdefghijklmnopqrstuvwx",
+    "abcdefghijklmnopqrstuvwz"
+  ],
+  "total": 2
+}
+
+

The following fields are returned in the JSON response body:

+
    +
  • deleted_media: an array of strings - List of deleted media_id
  • +
  • total: integer - Total number of deleted media_id
  • +
+
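As a sketch of how these parameters fit together (the base URL and helper name below are illustrative, not part of Synapse), the query string for e.g. "delete media not accessed in the last 30 days" can be built like so:

```python
import time

def build_delete_media_url(base_url, server_name, days=30, size_gt=0,
                           keep_profiles=True):
    """Build the URL for the delete-local-media-by-date-or-size endpoint."""
    # before_ts is a Unix timestamp in milliseconds of *last access*
    before_ts = int((time.time() - days * 24 * 3600) * 1000)
    return (
        f"{base_url}/_synapse/admin/v1/media/{server_name}/delete"
        f"?before_ts={before_ts}"
        f"&size_gt={size_gt}"
        f"&keep_profiles={'true' if keep_profiles else 'false'}"
    )

# Send this as a POST with an admin access_token in the Authorization header.
url = build_delete_media_url("https://matrix.example.com", "example.com")
```

The actual HTTP call is omitted; only the URL construction is shown.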

Purge Remote Media API

+

The purge remote media API allows server admins to purge old cached remote media.

+

The API is:

+
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
+
+{}
+
+

URL Parameters

+
    +
  • unix_timestamp_in_ms: string representing a positive integer - Unix timestamp in ms. +All cached media that was last accessed before this timestamp will be removed.
  • +
+

Response:

+
{
+  "deleted": 10
+}
+
+

The following fields are returned in the JSON response body:

+
    +
  • deleted: integer - The number of media items successfully deleted
  • +
+

To use it, you will need to authenticate by providing an access_token for a +server admin: see Admin API.

+

If the user re-requests purged remote media, Synapse will re-request the media +from the originating server.

+

Purge History API

+

The purge history API allows server admins to purge historic events from their +database, reclaiming disk space.

+

Depending on the amount of history being purged a call to the API may take +several minutes or longer. During this period users will not be able to +paginate further back in the room from the point being purged from.

+

Note that Synapse requires at least one message in each room, so it will never +delete the last message in a room.

+

The API is:

+
POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

By default, events sent by local users are not deleted, as they may represent +the only copies of this content in existence. (Events sent by remote users are +deleted.)

+

Room state data (such as joins, leaves, topic) is always preserved.

+

To delete local message events as well, set delete_local_events in the body:

+
{
+   "delete_local_events": true
+}
+
+

The caller must specify the point in the room to purge up to. This can be +specified by including an event_id in the URI, or by setting a +purge_up_to_event_id or purge_up_to_ts in the request body. If an event +id is given, that event (and others at the same graph depth) will be retained. +If purge_up_to_ts is given, it should be a timestamp since the unix epoch, +in milliseconds.
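As an illustration (the helper function is hypothetical; only the body shape comes from the description above), a request body purging everything older than 90 days could be built as:

```python
import json
import time

def purge_history_body(days=90, delete_local_events=False):
    """Body for POST /_synapse/admin/v1/purge_history/<room_id>."""
    return {
        # purge_up_to_ts is milliseconds since the Unix epoch
        "purge_up_to_ts": int((time.time() - days * 24 * 3600) * 1000),
        "delete_local_events": delete_local_events,
    }

body = json.dumps(purge_history_body())
```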

+

The API starts the purge running, and returns immediately with a JSON body with +a purge id:

+
{
+    "purge_id": "<opaque id>"
+}
+
+

Purge status query

+

It is possible to poll for updates on recent purges with a second API:

+
GET /_synapse/admin/v1/purge_history_status/<purge_id>
+
+

Again, you will need to authenticate by providing an access_token for a +server admin.

+

This API returns a JSON body like the following:

+
{
+    "status": "active"
+}
+
+

The status will be one of active, complete, or failed.
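Since the purge runs asynchronously, a caller typically polls the status endpoint until a terminal state is reached. A minimal sketch (the HTTP fetch is abstracted into a callable, so this runs without a server):

```python
import time

def wait_for_purge(fetch_status, poll_interval=0.0):
    """Poll until the purge reaches a terminal state ("complete" or "failed").

    fetch_status is any zero-argument callable returning the "status" field
    from GET /_synapse/admin/v1/purge_history_status/<purge_id>.
    """
    while True:
        status = fetch_status()
        if status in ("complete", "failed"):
            return status
        time.sleep(poll_interval)

# Stubbed status sequence instead of real HTTP calls:
responses = iter(["active", "active", "complete"])
result = wait_for_purge(lambda: next(responses))  # → "complete"
```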

+

Reclaim disk space (Postgres)

+

To reclaim the disk space and return it to the operating system, you need to run +VACUUM FULL; on the database.

+

https://www.postgresql.org/docs/current/sql-vacuum.html

+

Deprecated: Purge room API

+

The old Purge room API is deprecated and will be removed in a future release. +See the new Delete Room API for more details.

+

This API will remove all trace of a room from your database.

+

All local users must have left the room before it can be removed.

+

The API is:

+
POST /_synapse/admin/v1/purge_room
+
+{
+    "room_id": "!room:id"
+}
+
+

You must authenticate using the access token of an admin user.

+

Shared-Secret Registration

+

This API allows for the creation of users in an administrative and +non-interactive way. This is generally used for bootstrapping a Synapse +instance with administrator accounts.

+

To authenticate yourself to the server, you will need both the shared secret +(registration_shared_secret in the homeserver configuration), and a +one-time nonce. If the registration shared secret is not configured, this API +is not enabled.

+

To fetch the nonce, you need to request one from the API:

+
> GET /_synapse/admin/v1/register
+
+< {"nonce": "thisisanonce"}
+
+

Once you have the nonce, you can make a POST to the same URL with a JSON +body containing the nonce, username, password, whether they are an admin +(optional, False by default), and an HMAC digest of the content. You can also +set the displayname (optional; defaults to the username).

+

As an example:

+
> POST /_synapse/admin/v1/register
+> {
+   "nonce": "thisisanonce",
+   "username": "pepper_roni",
+   "displayname": "Pepper Roni",
+   "password": "pizza",
+   "admin": true,
+   "mac": "mac_digest_here"
+  }
+
+< {
+   "access_token": "token_here",
+   "user_id": "@pepper_roni:localhost",
+   "home_server": "test",
+   "device_id": "device_id_here"
+  }
+
+

The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being +the shared secret and the content being the nonce, user, password, either the +string "admin" or "notadmin", and optionally the user_type, +each separated by NULs. For an example of generation in Python:

+
import hmac, hashlib
+
+# shared_secret is the registration_shared_secret from your homeserver
+# configuration, as bytes
+shared_secret = b"<registration_shared_secret>"
+
+def generate_mac(nonce, user, password, admin=False, user_type=None):
+
+    mac = hmac.new(
+      key=shared_secret,
+      digestmod=hashlib.sha1,
+    )
+
+    # the fields are fed to the HMAC in order, separated by NUL bytes
+    mac.update(nonce.encode('utf8'))
+    mac.update(b"\x00")
+    mac.update(user.encode('utf8'))
+    mac.update(b"\x00")
+    mac.update(password.encode('utf8'))
+    mac.update(b"\x00")
+    mac.update(b"admin" if admin else b"notadmin")
+    if user_type:
+        mac.update(b"\x00")
+        mac.update(user_type.encode('utf8'))
+
+    return mac.hexdigest()
+
+
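The sequential update calls above are equivalent to HMAC-ing the NUL-joined fields. A compact, self-contained restatement (using a made-up secret and omitting the optional user_type):

```python
import hashlib
import hmac

def registration_mac(shared_secret, nonce, user, password, admin=False):
    """HMAC-SHA1 over the NUL-separated registration fields."""
    msg = b"\x00".join([
        nonce.encode("utf8"),
        user.encode("utf8"),
        password.encode("utf8"),
        b"admin" if admin else b"notadmin",
    ])
    return hmac.new(key=shared_secret, msg=msg, digestmod=hashlib.sha1).hexdigest()

# "secret" is a placeholder, not a real registration_shared_secret.
mac = registration_mac(b"secret", "thisisanonce", "pepper_roni", "pizza", admin=True)
```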

Edit Room Membership API

+

This API allows an administrator to join a user account with a given user_id +to a room with a given room_id_or_alias. You can only modify the membership of +local users. The server administrator must be in the room and have permission to +invite users.

+

Parameters

+

The following parameters are available:

+
    +
  • user_id - Fully qualified user: for example, @user:server.com.
  • +
  • room_id_or_alias - The room identifier or alias to join: for example, +!636q39766251:server.com.
  • +
+

Usage

+
POST /_synapse/admin/v1/join/<room_id_or_alias>
+
+{
+  "user_id": "@user:server.com"
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: see Admin API.

+

Response:

+
{
+  "room_id": "!636q39766251:server.com"
+}
+
+


List Room API

+

The List Room admin API allows server admins to get a list of rooms on their +server. There are various parameters available that allow for filtering and +sorting the returned list. This API supports pagination.

+

Parameters

+

The following query parameters are available:

+
    +
  • from - Offset in the returned list. Defaults to 0.
  • +
  • limit - Maximum amount of rooms to return. Defaults to 100.
  • +
  • order_by - The method in which to sort the returned list of rooms. Valid values are: +
      +
    • alphabetical - Same as name. This is deprecated.
    • +
    • size - Same as joined_members. This is deprecated.
    • +
    • name - Rooms are ordered alphabetically by room name. This is the default.
    • +
    • canonical_alias - Rooms are ordered alphabetically by main alias address of the room.
    • +
    • joined_members - Rooms are ordered by the number of members. Largest to smallest.
    • +
    • joined_local_members - Rooms are ordered by the number of local members. Largest to smallest.
    • +
    • version - Rooms are ordered by room version. Largest to smallest.
    • +
    • creator - Rooms are ordered alphabetically by creator of the room.
    • +
    • encryption - Rooms are ordered alphabetically by the end-to-end encryption algorithm.
    • +
    • federatable - Rooms are ordered by whether the room is federatable.
    • +
    • public - Rooms are ordered by visibility in room list.
    • +
    • join_rules - Rooms are ordered alphabetically by join rules of the room.
    • +
    • guest_access - Rooms are ordered alphabetically by guest access option of the room.
    • +
    • history_visibility - Rooms are ordered alphabetically by visibility of history of the room.
    • +
    • state_events - Rooms are ordered by number of state events. Largest to smallest.
    • +
    +
  • +
  • dir - Direction of room order. Either f for forwards or b for backwards. Setting +this value to b will reverse the above sort order. Defaults to f.
  • +
  • search_term - Filter rooms by their room name. Search term can be contained in any +part of the room name. Defaults to no filtering.
  • +
+

The following fields are possible in the JSON response body:

+
    +
  • rooms - An array of objects, each containing information about a room. +
      +
    • Room objects contain the following fields: +
        +
      • room_id - The ID of the room.
      • +
      • name - The name of the room.
      • +
      • canonical_alias - The canonical (main) alias address of the room.
      • +
      • joined_members - How many users are currently in the room.
      • +
      • joined_local_members - How many local users are currently in the room.
      • +
      • version - The version of the room as a string.
      • +
      • creator - The user_id of the room creator.
      • +
      • encryption - Algorithm of end-to-end encryption of messages. Is null if encryption is not active.
      • +
      • federatable - Whether users on other servers can join this room.
      • +
      • public - Whether the room is visible in room directory.
      • +
      • join_rules - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
      • +
      • guest_access - Whether guests can join the room. One of: ["can_join", "forbidden"].
      • +
      • history_visibility - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
      • +
      • state_events - Total number of state_events of a room. Complexity of the room.
      • +
      +
    • +
    +
  • +
  • offset - The current pagination offset in rooms. This parameter should be +used instead of next_token for room offset as next_token is +not intended to be parsed.
  • +
  • total_rooms - The total number of rooms this query can return. Using this +and offset, you have enough information to know the current +progression through the list.
  • +
  • next_batch - If this field is present, we know that there are potentially +more rooms on the server that did not all fit into this response. +We can use next_batch to get the "next page" of results. To do +so, simply repeat your request, setting the from parameter to +the value of next_batch.
  • +
  • prev_batch - If this field is present, it is possible to paginate backwards. +Use prev_batch for the from value in the next request to +get the "previous page" of results.
  • +
+
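Putting the pagination fields together, a client can walk the full room list by following next_batch. A sketch with the HTTP layer stubbed out as a callable:

```python
def fetch_all_rooms(fetch_page):
    """Collect every room by following next_batch, as described above.

    fetch_page is any callable mapping a `from` offset to the parsed JSON
    response of GET /_synapse/admin/v1/rooms?from=<offset>.
    """
    rooms, offset = [], 0
    while True:
        page = fetch_page(offset)
        rooms.extend(page["rooms"])
        if "next_batch" not in page:
            return rooms
        offset = page["next_batch"]

# Stubbed two-page walk instead of real HTTP:
pages = {
    0: {"rooms": [{"room_id": "!a:x"}], "next_batch": 1},
    1: {"rooms": [{"room_id": "!b:x"}]},
}
all_rooms = fetch_all_rooms(lambda frm: pages[frm])  # two rooms total
```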

Usage

+

A standard request with no filtering:

+
GET /_synapse/admin/v1/rooms
+
+{}
+
+

Response:

+
{
+  "rooms": [
+    {
+      "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
+      "name": "Matrix HQ",
+      "canonical_alias": "#matrix:matrix.org",
+      "joined_members": 8326,
+      "joined_local_members": 2,
+      "version": "1",
+      "creator": "@foo:matrix.org",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 93534
+    },
+    ... (8 hidden items) ...
+    {
+      "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
+      "name": "This Week In Matrix (TWIM)",
+      "canonical_alias": "#twim:matrix.org",
+      "joined_members": 314,
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:matrix.org",
+      "encryption": "m.megolm.v1.aes-sha2",
+      "federatable": true,
+      "public": false,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8345
+    }
+  ],
+  "offset": 0,
+  "total_rooms": 10
+}
+
+

Filtering by room name:

+
GET /_synapse/admin/v1/rooms?search_term=TWIM
+
+{}
+
+

Response:

+
{
+  "rooms": [
+    {
+      "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
+      "name": "This Week In Matrix (TWIM)",
+      "canonical_alias": "#twim:matrix.org",
+      "joined_members": 314,
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:matrix.org",
+      "encryption": "m.megolm.v1.aes-sha2",
+      "federatable": true,
+      "public": false,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8
+    }
+  ],
+  "offset": 0,
+  "total_rooms": 1
+}
+
+

Paginating through a list of rooms:

+
GET /_synapse/admin/v1/rooms?order_by=size
+
+{}
+
+

Response:

+
{
+  "rooms": [
+    {
+      "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
+      "name": "Matrix HQ",
+      "canonical_alias": "#matrix:matrix.org",
+      "joined_members": 8326,
+      "joined_local_members": 2,
+      "version": "1",
+      "creator": "@foo:matrix.org",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 93534
+    },
+    ... (98 hidden items) ...
+    {
+      "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
+      "name": "This Week In Matrix (TWIM)",
+      "canonical_alias": "#twim:matrix.org",
+      "joined_members": 314,
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:matrix.org",
+      "encryption": "m.megolm.v1.aes-sha2",
+      "federatable": true,
+      "public": false,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8345
+    }
+  ],
+  "offset": 0,
+  "total_rooms": 150,
+  "next_token": 100
+}
+
+

The presence of the next_token parameter tells us that there are more rooms +than returned in this request, and we need to make another request to get them. +To get the next batch of room results, we repeat our request, setting the from +parameter to the value of next_token.

+
GET /_synapse/admin/v1/rooms?order_by=size&from=100
+
+{}
+
+

Response:

+
{
+  "rooms": [
+    {
+      "room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
+      "name": "Music Theory",
+      "canonical_alias": "#musictheory:matrix.org",
+      "joined_members": 127,
+      "joined_local_members": 2,
+      "version": "1",
+      "creator": "@foo:matrix.org",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 93534
+    },
+    ... (48 hidden items) ...
+    {
+      "room_id": "!twcBhHVdZlQWuuxBhN:termina.org.uk",
+      "name": "weechat-matrix",
+      "canonical_alias": "#weechat-matrix:termina.org.uk",
+      "joined_members": 137,
+      "joined_local_members": 20,
+      "version": "4",
+      "creator": "@foo:termina.org.uk",
+      "encryption": null,
+      "federatable": true,
+      "public": true,
+      "join_rules": "invite",
+      "guest_access": null,
+      "history_visibility": "shared",
+      "state_events": 8345
+    }
+  ],
+  "offset": 100,
+  "prev_batch": 0,
+  "total_rooms": 150
+}
+
+

Once the next_token parameter is no longer present, we know we've reached the +end of the list.

+

Room Details API

+

The Room Details admin API allows server admins to get all details of a room.

+

The following fields are possible in the JSON response body:

+
    +
  • room_id - The ID of the room.
  • +
  • name - The name of the room.
  • +
  • topic - The topic of the room.
  • +
  • avatar - The mxc URI to the avatar of the room.
  • +
  • canonical_alias - The canonical (main) alias address of the room.
  • +
  • joined_members - How many users are currently in the room.
  • +
  • joined_local_members - How many local users are currently in the room.
  • +
  • joined_local_devices - How many local devices are currently in the room.
  • +
  • version - The version of the room as a string.
  • +
  • creator - The user_id of the room creator.
  • +
  • encryption - Algorithm of end-to-end encryption of messages. Is null if encryption is not active.
  • +
  • federatable - Whether users on other servers can join this room.
  • +
  • public - Whether the room is visible in room directory.
  • +
  • join_rules - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
  • +
  • guest_access - Whether guests can join the room. One of: ["can_join", "forbidden"].
  • +
  • history_visibility - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
  • +
  • state_events - Total number of state_events of a room. Complexity of the room.
  • +
+

Usage

+

A standard request:

+
GET /_synapse/admin/v1/rooms/<room_id>
+
+{}
+
+

Response:

+
{
+  "room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
+  "name": "Music Theory",
+  "avatar": "mxc://matrix.org/AQDaVFlbkQoErdOgqWRgiGSV",
+  "topic": "Theory, Composition, Notation, Analysis",
+  "canonical_alias": "#musictheory:matrix.org",
+  "joined_members": 127,
+  "joined_local_members": 2,
+  "joined_local_devices": 2,
+  "version": "1",
+  "creator": "@foo:matrix.org",
+  "encryption": null,
+  "federatable": true,
+  "public": true,
+  "join_rules": "invite",
+  "guest_access": null,
+  "history_visibility": "shared",
+  "state_events": 93534
+}
+
+

Room Members API

+

The Room Members admin API allows server admins to get a list of all members of a room.

+

The response includes the following fields:

+
    +
  • members - A list of all the members that are present in the room, represented by their ids.
  • +
  • total - Total number of members in the room.
  • +
+

Usage

+

A standard request:

+
GET /_synapse/admin/v1/rooms/<room_id>/members
+
+{}
+
+

Response:

+
{
+  "members": [
+    "@foo:matrix.org",
+    "@bar:matrix.org",
+    "@foobar:matrix.org"
+  ],
+  "total": 3
+}
+
+

Room State API

+

The Room State admin API allows server admins to get a list of all state events in a room.

+

The response includes the following fields:

+
    +
  • state - The current state of the room at the time of request.
  • +
+

Usage

+

A standard request:

+
GET /_synapse/admin/v1/rooms/<room_id>/state
+
+{}
+
+

Response:

+
{
+  "state": [
+    {"type": "m.room.create", "state_key": "", "etc": true},
+    {"type": "m.room.power_levels", "state_key": "", "etc": true},
+    {"type": "m.room.name", "state_key": "", "etc": true}
+  ]
+}
+
+

Delete Room API

+

The Delete Room admin API allows server admins to remove rooms from server +and block these rooms.

+

Shuts down a room. Moves all local users and room aliases automatically to a +new room if new_room_user_id is set. Otherwise local users simply leave the +room, without any notification.

+

The new room will be created with the user specified by the new_room_user_id parameter +as room administrator and will contain a message explaining what happened. Users invited +to the new room will have power level -10 by default, and thus be unable to speak.

+

If block is true, it prevents new joins to the old room.

+

If purge is true (the default), all traces of the old room will be removed +from your database after removing all local users. If you do not want this to +happen, set purge to false. +Depending on the amount of history being purged, a call to the API may take +several minutes or longer.

+

The local server will only have the power to move local users and room aliases to +the new room. Users on other servers will be unaffected.

+

The API is:

+
DELETE /_synapse/admin/v1/rooms/<room_id>
+
+

with a body of:

+
{
+    "new_room_user_id": "@someuser:example.com",
+    "room_name": "Content Violation Notification",
+    "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
+    "block": true,
+    "purge": true
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: see Admin API.

+

A response body like the following is returned:

+
{
+    "kicked_users": [
+        "@foobar:example.com"
+    ],
+    "failed_to_kick_users": [],
+    "local_aliases": [
+        "#badroom:example.com",
+        "#evilsaloon:example.com"
+    ],
+    "new_room_id": "!newroomid:example.com"
+}
+
+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • room_id - The ID of the room.
  • +
+

The following JSON body parameters are available:

+
    +
  • new_room_user_id - Optional. If set, a new room will be created with this user ID +as the creator and admin, and all users in the old room will be moved into that +room. If not set, no new room will be created and the users will just be removed +from the old room. The user ID must be on the local server, but does not necessarily +have to belong to a registered user.
  • +
  • room_name - Optional. A string representing the name of the room that new users will be +invited to. Defaults to Content Violation Notification
  • +
  • message - Optional. A string containing the first message that will be sent as +new_room_user_id in the new room. Ideally this will clearly convey why the +original room was shut down. Defaults to Sharing illegal content on this server is not permitted and rooms in violation will be blocked.
  • +
  • block - Optional. If set to true, this room will be added to a blocking list, preventing +future attempts to join the room. Defaults to false.
  • +
  • purge - Optional. If set to true, it will remove all traces of the room from your database. +Defaults to true.
  • +
  • force_purge - Optional, and ignored unless purge is true. If set to true, it +will force a purge to go ahead even if there are local users still in the room. Do not +use this unless a regular purge operation fails, as it could leave those users' +clients in a confused state.
  • +
+

The JSON body must not be empty. The body must be at least {}.
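A small helper can assemble a valid body from these parameters, omitting optional keys that are unset (the helper itself is illustrative, not part of Synapse):

```python
import json

def delete_room_body(new_room_user_id=None, block=True, purge=True,
                     room_name="Content Violation Notification",
                     message=None):
    """Build the JSON body for DELETE /_synapse/admin/v1/rooms/<room_id>.

    The body must be at least {}; optional keys are only included when set.
    """
    body = {"block": block, "purge": purge}
    if new_room_user_id:
        body["new_room_user_id"] = new_room_user_id
        body["room_name"] = room_name
        if message:
            body["message"] = message
    return json.dumps(body)
```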

+

Response

+

The following fields are returned in the JSON response body:

+
    +
  • kicked_users - An array of users (user_id) that were kicked.
  • +
  • failed_to_kick_users - An array of users (user_id) that were not kicked.
  • +
  • local_aliases - An array of strings representing the local aliases that were migrated from +the old room to the new.
  • +
  • new_room_id - A string representing the room ID of the new room.
  • +
+

Undoing room shutdowns

+

Note: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level, +the structure can and does change without notice.

+

First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it +never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible +to recover at all:

+
    +
  • If the room was invite-only, your users will need to be re-invited.
  • +
  • If the room no longer has any members at all, it'll be impossible to rejoin.
  • +
  • The first user to rejoin will have to do so via an alias on a different server.
  • +
+

With all that being said, if you still want to try and recover the room:

+
    +
  1. For safety reasons, shut down Synapse.
  2. +
  3. In the database, run DELETE FROM blocked_rooms WHERE room_id = '!example:example.org'; +
      +
    • As a precaution, it's recommended to run this in a transaction: BEGIN; DELETE ...;, verify you got 1 result, then COMMIT;.
    • +
    • The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
    • +
    +
  4. +
  5. Restart Synapse.
  6. +
+

You will have to manually handle, if you so choose, the following:

+
    +
  • Aliases that would have been redirected to the Content Violation room.
  • +
  • Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
  • +
  • Removal of the Content Violation room if desired.
  • +
+

Deprecated endpoint

+

The previous, deprecated API will be removed in a future release. It was:

+
POST /_synapse/admin/v1/rooms/<room_id>/delete
+
+

It behaves in the same way as the current endpoint, except for the path and the HTTP method.

+

Make Room Admin API

+

Grants another user the highest power available to a local user who is in the room. +If the user is not in the room and it is not publicly joinable, the user is invited first.

+

By default the server admin (the caller) is granted power, but another user can +optionally be specified, e.g.:

+
    POST /_synapse/admin/v1/rooms/<room_id_or_alias>/make_room_admin
+    {
+        "user_id": "@foo:example.com"
+    }
+
+

Forward Extremities Admin API

+

Enables querying and deleting forward extremities from rooms. When a lot of forward +extremities accumulate in a room, performance can become degraded. For details, see +#1760.

+

Check for forward extremities

+

To check the status of forward extremities for a room:

+
    GET /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
+
+

A response as follows will be returned:

+
{
+  "count": 1,
+  "results": [
+    {
+      "event_id": "$M5SP266vsnxctfwFgFLNceaCo3ujhRtg_NiiHabcdefgh",
+      "state_group": 439,
+      "depth": 123,
+      "received_ts": 1611263016761
+    }
+  ]
+}    
+
+

Deleting forward extremities

+

WARNING: Please ensure you know what you're doing and have read +the related issue #1760. +Under no situations should this API be executed as an automated maintenance task!

+

If a room has lots of forward extremities, the extras can be +deleted as follows:

+
    DELETE /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
+
+

A response like the following will be returned, indicating the number of forward extremities +that were deleted.

+
{
+  "deleted": 1
+}
+
+

Event Context API

+

This API lets a client find the context of an event. This is designed primarily to investigate abuse reports.

+
GET /_synapse/admin/v1/rooms/<room_id>/context/<event_id>
+
+

This API mimics GET /_matrix/client/r0/rooms/{roomId}/context/{eventId}. Please refer to the link for full details on parameters and response.

+

Example response:

+
{
+  "end": "t29-57_2_0_2",
+  "events_after": [
+    {
+      "content": {
+        "body": "This is an example text message",
+        "msgtype": "m.text",
+        "format": "org.matrix.custom.html",
+        "formatted_body": "<b>This is an example text message</b>"
+      },
+      "type": "m.room.message",
+      "event_id": "$143273582443PhrSn:example.org",
+      "room_id": "!636q39766251:example.com",
+      "sender": "@example:example.org",
+      "origin_server_ts": 1432735824653,
+      "unsigned": {
+        "age": 1234
+      }
+    }
+  ],
+  "event": {
+    "content": {
+      "body": "filename.jpg",
+      "info": {
+        "h": 398,
+        "w": 394,
+        "mimetype": "image/jpeg",
+        "size": 31037
+      },
+      "url": "mxc://example.org/JWEIFJgwEIhweiWJE",
+      "msgtype": "m.image"
+    },
+    "type": "m.room.message",
+    "event_id": "$f3h4d129462ha:example.com",
+    "room_id": "!636q39766251:example.com",
+    "sender": "@example:example.org",
+    "origin_server_ts": 1432735824653,
+    "unsigned": {
+      "age": 1234
+    }
+  },
+  "events_before": [
+    {
+      "content": {
+        "body": "something-important.doc",
+        "filename": "something-important.doc",
+        "info": {
+          "mimetype": "application/msword",
+          "size": 46144
+        },
+        "msgtype": "m.file",
+        "url": "mxc://example.org/FHyPlCeYUSFFxlgbQYZmoEoe"
+      },
+      "type": "m.room.message",
+      "event_id": "$143273582443PhrSn:example.org",
+      "room_id": "!636q39766251:example.com",
+      "sender": "@example:example.org",
+      "origin_server_ts": 1432735824653,
+      "unsigned": {
+        "age": 1234
+      }
+    }
+  ],
+  "start": "t27-54_2_0_2",
+  "state": [
+    {
+      "content": {
+        "creator": "@example:example.org",
+        "room_version": "1",
+        "m.federate": true,
+        "predecessor": {
+          "event_id": "$something:example.org",
+          "room_id": "!oldroom:example.org"
+        }
+      },
+      "type": "m.room.create",
+      "event_id": "$143273582443PhrSn:example.org",
+      "room_id": "!636q39766251:example.com",
+      "sender": "@example:example.org",
+      "origin_server_ts": 1432735824653,
+      "unsigned": {
+        "age": 1234
+      },
+      "state_key": ""
+    },
+    {
+      "content": {
+        "membership": "join",
+        "avatar_url": "mxc://example.org/SEsfnsuifSDFSSEF",
+        "displayname": "Alice Margatroid"
+      },
+      "type": "m.room.member",
+      "event_id": "$143273582443PhrSn:example.org",
+      "room_id": "!636q39766251:example.com",
+      "sender": "@example:example.org",
+      "origin_server_ts": 1432735824653,
+      "unsigned": {
+        "age": 1234
+      },
+      "state_key": "@alice:example.org"
+    }
+  ]
+}
+
+

Server Notices

+

The API to send notices is as follows:

+
POST /_synapse/admin/v1/send_server_notice
+
+

or:

+
PUT /_synapse/admin/v1/send_server_notice/{txnId}
+
+

You will need to authenticate with an access token for an admin user.

+

When using the PUT form, retransmissions with the same transaction ID will be +ignored in the same way as with PUT /_matrix/client/r0/rooms/{roomId}/send/{eventType}/{txnId}.

+

The request body should look something like the following:

+
{
+    "user_id": "@target_user:server_name",
+    "content": {
+        "msgtype": "m.text",
+        "body": "This is my message"
+    }
+}
+
+

You can optionally include the following additional parameters:

+
    +
  • type: the type of event. Defaults to m.room.message.
  • +
  • state_key: Setting this will result in a state event being sent.
  • +
+

Once the notice has been sent, the API will return the following response:

+
{
+    "event_id": "<event_id>"
+}
+
+

Note that server notices must be enabled in homeserver.yaml before this API +can be used. See server_notices.md for more information.
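For idempotent delivery, the PUT form with a generated transaction ID is convenient. A sketch that only builds the request (the base URL is a placeholder; no HTTP call is made):

```python
import json
import uuid

def server_notice_request(base_url, user_id, text, txn_id=None):
    """Build (method, url, body) for an idempotent server notice.

    Retransmissions with the same txn_id are ignored by the server,
    so retries with a fixed ID are safe.
    """
    txn_id = txn_id or str(uuid.uuid4())
    url = f"{base_url}/_synapse/admin/v1/send_server_notice/{txn_id}"
    body = {"user_id": user_id, "content": {"msgtype": "m.text", "body": text}}
    return "PUT", url, json.dumps(body)

method, url, body = server_notice_request(
    "https://matrix.example.com", "@target_user:example.com",
    "This is my message", txn_id="notice-1",
)
```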

+

Deprecated: Shutdown room API

+

The old Shutdown room API is deprecated and will be removed in a future release. +See the new Delete Room API for more details.

+

Shuts down a room, preventing new joins and moves local users and room aliases automatically +to a new room. The new room will be created with the user specified by the +new_room_user_id parameter as room administrator and will contain a message +explaining what happened. Users invited to the new room will have power level +-10 by default, and thus be unable to speak. The old room's power levels will be changed to +disallow any further invites or joins.

+

The local server will only have the power to move local users and room aliases to +the new room. Users on other servers will be unaffected.

+

API

+

You will need to authenticate with an access token for an admin user.

+

URL

+

POST /_synapse/admin/v1/shutdown_room/{room_id}

+

URL Parameters

+
    +
  • room_id - The ID of the room (e.g !someroom:example.com)
  • +
+

JSON Body Parameters

+
    +
  • new_room_user_id - Required. A string representing the user ID of the user that will administer +the new room that all users in the old room will be moved to.
  • +
  • room_name - Optional. A string representing the name of the room that new users will be +invited to.
  • +
  • message - Optional. A string containing the first message that will be sent as +new_room_user_id in the new room. Ideally this will clearly convey why the +original room was shut down.
  • +
+

If not specified, the default value of room_name is "Content Violation +Notification". The default value of message is "Sharing illegal content on +this server is not permitted and rooms in violation will be blocked."

+

Response Parameters

+
    +
  • kicked_users - An integer number representing the number of users that +were kicked.
  • +
  • failed_to_kick_users - An integer number representing the number of users +that were not kicked.
  • +
  • local_aliases - An array of strings representing the local aliases that were migrated from +the old room to the new.
  • +
  • new_room_id - A string representing the room ID of the new room.
  • +
+

Example

+

Request:

+
POST /_synapse/admin/v1/shutdown_room/!somebadroom%3Aexample.com
+
+{
+    "new_room_user_id": "@someuser:example.com",
+    "room_name": "Content Violation Notification",
+    "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service."
+}
+
+

Response:

+
{
+    "kicked_users": 5,
+    "failed_to_kick_users": 0,
+    "local_aliases": ["#badroom:example.com", "#evilsaloon:example.com],
+    "new_room_id": "!newroomid:example.com",
+},
+
+
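One detail worth noting about the example request above: the room ID goes into the URL path, so characters such as `!` and `:` must be percent-encoded. A small sketch (the room ID is the hypothetical one from the example):

```python
from urllib.parse import quote

room_id = "!somebadroom:example.com"

# Room IDs contain '!' and ':', which must be percent-encoded
# when embedded in the URL path (as %21 and %3A respectively).
path = f"/_synapse/admin/v1/shutdown_room/{quote(room_id, safe='')}"

body = {
    "new_room_user_id": "@someuser:example.com",
    "room_name": "Content Violation Notification",
    "message": "Bad Room has been shutdown due to content violations on this server.",
}
```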

Undoing room shutdowns

+

Note: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level, +the structure can and does change without notice.

+

First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it +never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible +to recover at all:

+
    +
  • If the room was invite-only, your users will need to be re-invited.
  • +
  • If the room no longer has any members at all, it'll be impossible to rejoin.
  • +
  • The first user to rejoin will have to do so via an alias on a different server.
  • +
+

With all that being said, if you still want to try and recover the room:

+
    +
  1. For safety reasons, shut down Synapse.
  2. +
  3. In the database, run DELETE FROM blocked_rooms WHERE room_id = '!example:example.org'; +
      +
    • For caution: it's recommended to run this in a transaction: BEGIN; DELETE ...;, verify you got 1 result, then COMMIT;.
    • +
    • The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
    • +
    +
  4. +
  5. Restart Synapse.
  6. +
+
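The transaction discipline recommended in step 3 (delete, verify exactly one row was affected, then commit) can be illustrated in Python. This sketch uses an in-memory SQLite table as a stand-in for Synapse's real `blocked_rooms` table; on an actual deployment you would run the equivalent SQL against Synapse's own database.

```python
import sqlite3

# Illustration only: an in-memory table standing in for Synapse's
# blocked_rooms table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blocked_rooms (room_id TEXT, user_id TEXT)")
conn.execute(
    "INSERT INTO blocked_rooms VALUES ('!example:example.org', '@admin:example.org')"
)
conn.commit()

# Delete inside a transaction, verify the row count, then commit.
cur = conn.execute(
    "DELETE FROM blocked_rooms WHERE room_id = ?", ("!example:example.org",)
)
if cur.rowcount == 1:
    conn.commit()    # exactly one row affected: safe to commit
else:
    conn.rollback()  # unexpected result: undo the delete

remaining = conn.execute("SELECT COUNT(*) FROM blocked_rooms").fetchone()[0]
```

Checking `rowcount` before committing is what catches typos in the room ID: zero rows affected means the `DELETE` matched nothing and should be rolled back and investigated.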

You will have to manually handle, if you so choose, the following:

+
    +
  • Aliases that would have been redirected to the Content Violation room.
  • +
  • Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
  • +
  • Removal of the Content Violation room if desired.
  • +
+

Users' media usage statistics

+

Returns information about all local media usage by users, with the +option to filter by time and user.

+

The API is:

+
GET /_synapse/admin/v1/statistics/users/media
+
+

To use it, you will need to authenticate by providing an access_token +for a server admin: see Admin API.

+

A response body like the following is returned:

+
{
+  "users": [
+    {
+      "displayname": "foo_user_0",
+      "media_count": 2,
+      "media_length": 134,
+      "user_id": "@foo_user_0:test"
+    },
+    {
+      "displayname": "foo_user_1",
+      "media_count": 2,
+      "media_length": 134,
+      "user_id": "@foo_user_1:test"
+    }
+  ],
+  "next_token": 3,
+  "total": 10
+}
+
+

To paginate, check for next_token and if present, call the endpoint +again with from set to the value of next_token. This will return a new page.

+

If the endpoint does not return a next_token then there are no more +users to paginate through.

+
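The pagination contract described above (follow next_token until it is absent) can be sketched as a loop. The fetcher here is stubbed out with canned pages so the loop itself can be demonstrated; in practice `fetch_page` would perform the authenticated GET against the endpoint.

```python
# Canned pages standing in for real API responses. The final page
# carries no next_token, which is the termination signal.
PAGES = {
    0: {"users": ["@a:test", "@b:test"], "next_token": 2, "total": 3},
    2: {"users": ["@c:test"], "total": 3},
}

def fetch_page(from_token: int) -> dict:
    """Stub: in real use, GET the endpoint with ?from=<from_token>."""
    return PAGES[from_token]

def fetch_all_users() -> list:
    users, from_token = [], 0
    while True:
        page = fetch_page(from_token)
        users.extend(page["users"])
        if "next_token" not in page:  # absent next_token: no more pages
            return users
        from_token = page["next_token"]

all_users = fetch_all_users()
```

Note that `from` should be treated as opaque: always feed back the `next_token` value you received, rather than computing offsets yourself.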

Parameters

+

The following parameters should be set in the URL:

+
    +
  • limit: string representing a positive integer - Is optional but is +used for pagination, denoting the maximum number of items to return +in this call. Defaults to 100.
  • +
  • from: string representing a positive integer - Is optional but used for pagination, +denoting the offset in the returned results. This should be treated as an opaque value +and not explicitly set to anything other than the return value of next_token from a +previous call. Defaults to 0.
  • +
  • order_by - string - The method in which to sort the returned list of users. Valid values are: +
      +
    • user_id - Users are ordered alphabetically by user_id. This is the default.
    • +
    • displayname - Users are ordered alphabetically by displayname.
    • +
    • media_length - Users are ordered by the total size of uploaded media in bytes. +Smallest to largest.
    • +
    • media_count - Users are ordered by number of uploaded media. Smallest to largest.
    • +
    +
  • +
  • from_ts - string representing a positive integer - Considers only +files created at this timestamp or later. Unix timestamp in ms.
  • +
  • until_ts - string representing a positive integer - Considers only +files created at this timestamp or earlier. Unix timestamp in ms.
  • +
  • search_term - string - Filter users by their user ID localpart or displayname. +The search term can be found in any part of the string. +Defaults to no filtering.
  • +
  • dir - string - Direction of order. Either f for forwards or b for backwards. +Setting this value to b will reverse the above sort order. Defaults to f.
  • +
+

Response

+

The following fields are returned in the JSON response body:

+
    +
  • users - An array of objects, each containing information +about the user and their local media. Objects contain the following fields: +
      +
    • displayname - string - Displayname of this user.
    • +
    • media_count - integer - Number of uploaded media by this user.
    • +
    • media_length - integer - Size of uploaded media in bytes by this user.
    • +
    • user_id - string - Fully-qualified user ID (ex. @user:server.com).
    • +
    +
  • +
  • next_token - integer - Opaque value used for pagination. See above.
  • +
  • total - integer - Total number of users after filtering.
  • +
+

User Admin API

+

Query User Account

+

This API returns information about a specific user account.

+

The API is:

+
GET /_synapse/admin/v2/users/<user_id>
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

It returns a JSON body like the following:

+
{
+    "displayname": "User",
+    "threepids": [
+        {
+            "medium": "email",
+            "address": "<user_mail_1>"
+        },
+        {
+            "medium": "email",
+            "address": "<user_mail_2>"
+        }
+    ],
+    "avatar_url": "<avatar_url>",
+    "admin": 0,
+    "deactivated": 0,
+    "shadow_banned": 0,
+    "password_hash": "$2b$12$p9B4GkqYdRTPGD",
+    "creation_ts": 1560432506,
+    "appservice_id": null,
+    "consent_server_notice_sent": null,
+    "consent_version": null
+}
+
+

URL parameters:

+
    +
  • user_id: fully-qualified user id: for example, @user:server.com.
  • +
+

Create or modify Account

+

This API allows an administrator to create or modify a user account with a +specific user_id.

+

The API is:

+
PUT /_synapse/admin/v2/users/<user_id>
+
+

with a body of:

+
{
+    "password": "user_password",
+    "displayname": "User",
+    "threepids": [
+        {
+            "medium": "email",
+            "address": "<user_mail_1>"
+        },
+        {
+            "medium": "email",
+            "address": "<user_mail_2>"
+        }
+    ],
+    "avatar_url": "<avatar_url>",
+    "admin": false,
+    "deactivated": false
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

URL parameters:

+
    +
  • user_id: fully-qualified user id: for example, @user:server.com.
  • +
+

Body parameters:

+
    +
  • +

    password, optional. If provided, the user's password is updated and all +devices are logged out.

    +
  • +
  • +

    displayname, optional, defaults to the value of user_id.

    +
  • +
  • +

    threepids, optional, allows setting the third-party IDs (email, msisdn) +belonging to a user.

    +
  • +
  • +

    avatar_url, optional, must be a +MXC URI.

    +
  • +
  • +

    admin, optional, defaults to false.

    +
  • +
  • +

    deactivated, optional. If unspecified, deactivation state will be left +unchanged on existing accounts and set to false for new accounts. +A user cannot be erased by deactivating with this API. For details on +deactivating users see Deactivate Account.

    +
  • +
+

If the user already exists then optional parameters default to the current value.

+

In order to re-activate an account deactivated must be set to false. If +users do not login via single-sign-on, a new password must be provided.

+
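Since omitted optional parameters keep their current values on an existing account, a modification request only needs to carry the fields being changed. A hedged sketch of building such a PUT (the user ID and field values are hypothetical):

```python
import json
from urllib.parse import quote

# Hypothetical account update: only the fields being set are included;
# on an existing account, omitted optional fields keep their current values.
user_id = "@newuser:example.com"
body = {
    "password": "a-strong-password",
    "displayname": "New User",
    "admin": False,
}

# '@' and ':' in the user ID must be percent-encoded in the URL path.
path = f"/_synapse/admin/v2/users/{quote(user_id, safe='')}"
payload = json.dumps(body)
```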

List Accounts

+

This API returns all local user accounts. +By default, the response is ordered by ascending user ID.

+
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

A response body like the following is returned:

+
{
+    "users": [
+        {
+            "name": "<user_id1>",
+            "is_guest": 0,
+            "admin": 0,
+            "user_type": null,
+            "deactivated": 0,
+            "shadow_banned": 0,
+            "displayname": "<User One>",
+            "avatar_url": null
+        }, {
+            "name": "<user_id2>",
+            "is_guest": 0,
+            "admin": 1,
+            "user_type": null,
+            "deactivated": 0,
+            "shadow_banned": 0,
+            "displayname": "<User Two>",
+            "avatar_url": "<avatar_url>"
+        }
+    ],
+    "next_token": "100",
+    "total": 200
+}
+
+

To paginate, check for next_token and if present, call the endpoint again +with from set to the value of next_token. This will return a new page.

+

If the endpoint does not return a next_token then there are no more users +to paginate through.

+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • +

    user_id - Is optional and filters to only return users with user IDs +that contain this value. This parameter is ignored when using the name parameter.

    +
  • +
  • +

    name - Is optional and filters to only return users with user ID localparts +or displaynames that contain this value.

    +
  • +
  • +

    guests - string representing a bool - Is optional and if false will exclude guest users. +Defaults to true to include guest users.

    +
  • +
  • +

    deactivated - string representing a bool - Is optional and if true will include deactivated users. +Defaults to false to exclude deactivated users.

    +
  • +
  • +

    limit - string representing a positive integer - Is optional but is used for pagination, +denoting the maximum number of items to return in this call. Defaults to 100.

    +
  • +
  • +

    from - string representing a positive integer - Is optional but used for pagination, +denoting the offset in the returned results. This should be treated as an opaque value and +not explicitly set to anything other than the return value of next_token from a previous call. +Defaults to 0.

    +
  • +
  • +

    order_by - The method by which to sort the returned list of users. +If the ordered field has duplicates, the second order is always by ascending name, +which guarantees a stable ordering. Valid values are:

    +
      +
    • name - Users are ordered alphabetically by name. This is the default.
    • +
    • is_guest - Users are ordered by is_guest status.
    • +
    • admin - Users are ordered by admin status.
    • +
    • user_type - Users are ordered alphabetically by user_type.
    • +
    • deactivated - Users are ordered by deactivated status.
    • +
    • shadow_banned - Users are ordered by shadow_banned status.
    • +
    • displayname - Users are ordered alphabetically by displayname.
    • +
    • avatar_url - Users are ordered alphabetically by avatar URL.
    • +
    +
  • +
  • +

    dir - Direction of media order. Either f for forwards or b for backwards. +Setting this value to b will reverse the above sort order. Defaults to f.

    +
  • +
+

Caution. The database only has indexes on the columns name and created_ts. +This means that if a different sort order is used (is_guest, admin, +user_type, deactivated, shadow_banned, avatar_url or displayname), +this can cause a large load on the database, especially for large environments.

+
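The query parameters above combine into an ordinary query string. A small sketch (the particular parameter values are hypothetical), using one of the indexed sort columns to avoid the database-load caveat just mentioned:

```python
from urllib.parse import urlencode

# Hypothetical query: second page of 50 users, including deactivated
# accounts, ordered by name (an indexed column).
params = {
    "from": 50,
    "limit": 50,
    "deactivated": "true",
    "order_by": "name",
    "dir": "f",
}
url = "/_synapse/admin/v2/users?" + urlencode(params)
```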

Response

+

The following fields are returned in the JSON response body:

+
    +
  • +

users - An array of objects, each containing information about a user. +User objects contain the following fields:

    +
      +
    • name - string - Fully-qualified user ID (ex. @user:server.com).
    • +
    • is_guest - bool - Whether the user is a guest account.
    • +
    • admin - bool - Whether the user is a server administrator.
    • +
    • user_type - string - Type of the user. Normal users are type None. +This allows user type specific behaviour. There are also types support and bot.
    • +
    • deactivated - bool - Whether the user has been marked as deactivated.
    • +
    • shadow_banned - bool - Whether the user has been marked as shadow-banned.
    • +
    • displayname - string - The user's display name if they have set one.
    • +
    • avatar_url - string - The user's avatar URL if they have set one.
    • +
    +
  • +
  • +

    next_token: string representing a positive integer - Indication for pagination. See above.

    +
  • +
  • +

    total - integer - Total number of users.

    +
  • +
+

Query current sessions for a user

+

This API returns information about the active sessions for a specific user.

+

The endpoints are:

+
GET /_synapse/admin/v1/whois/<user_id>
+
+

and:

+
GET /_matrix/client/r0/admin/whois/<userId>
+
+

See also: Client Server +API Whois.

+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

It returns a JSON body like the following:

+
{
+    "user_id": "<user_id>",
+    "devices": {
+        "": {
+            "sessions": [
+                {
+                    "connections": [
+                        {
+                            "ip": "1.2.3.4",
+                            "last_seen": 1417222374433,
+                            "user_agent": "Mozilla/5.0 ..."
+                        },
+                        {
+                            "ip": "1.2.3.10",
+                            "last_seen": 1417222374500,
+                            "user_agent": "Dalvik/2.1.0 ..."
+                        }
+                    ]
+                }
+            ]
+        }
+    }
+}
+
+

last_seen is measured in milliseconds since the Unix epoch.

+
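Since last_seen is milliseconds since the Unix epoch, converting it to a readable timestamp requires dividing by 1000 first. A quick sketch using the value from the example response above:

```python
from datetime import datetime, timezone

# A last_seen value from the example response, in milliseconds
# since the Unix epoch.
last_seen = 1417222374433

# Divide by 1000 to get seconds before converting.
dt = datetime.fromtimestamp(last_seen / 1000, tz=timezone.utc)
stamp = dt.strftime("%Y-%m-%d %H:%M:%S UTC")
```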

Deactivate Account

+

This API deactivates an account. It removes active access tokens, resets the +password, and deletes third-party IDs (to prevent the user requesting a +password reset).

+

It can also mark the user as GDPR-erased. This means messages sent by the +user will still be visible by anyone that was in the room when these messages +were sent, but hidden from users joining the room afterwards.

+

The API is:

+
POST /_synapse/admin/v1/deactivate/<user_id>
+
+

with a body of:

+
{
+    "erase": true
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

The erase parameter is optional and defaults to false. +An empty body may be passed for backwards compatibility.

+

The following actions are performed when deactivating a user:

+
    +
  • Try to unbind 3PIDs from the identity server
  • +
  • Remove all 3PIDs from the homeserver
  • +
  • Delete all devices and E2EE keys
  • +
  • Delete all access tokens
  • +
  • Delete the password hash
  • +
  • Removal from all rooms the user is a member of
  • +
  • Remove the user from the user directory
  • +
  • Reject all pending invites
  • +
  • Remove all account validity information related to the user
  • +
+

The following additional actions are performed during deactivation if erase +is set to true:

+
    +
  • Remove the user's display name
  • +
  • Remove the user's avatar URL
  • +
  • Mark the user as erased
  • +
+

Reset password

+

Changes the password of another user. This will automatically log the user out of all their devices.

+

The API is:

+
POST /_synapse/admin/v1/reset_password/<user_id>
+
+

with a body of:

+
{
+   "new_password": "<secret>",
+   "logout_devices": true
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

The parameter new_password is required. +The parameter logout_devices is optional and defaults to true.

+

Get whether a user is a server administrator or not

+

The API is:

+
GET /_synapse/admin/v1/users/<user_id>/admin
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

A response body like the following is returned:

+
{
+    "admin": true
+}
+
+

Change whether a user is a server administrator or not

+

Note that you cannot demote yourself.

+

The API is:

+
PUT /_synapse/admin/v1/users/<user_id>/admin
+
+

with a body of:

+
{
+    "admin": true
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

List room memberships of a user

+

Gets a list of all room_ids that a specific user_id is a member of.

+

The API is:

+
GET /_synapse/admin/v1/users/<user_id>/joined_rooms
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

A response body like the following is returned:

+
    {
+        "joined_rooms": [
+            "!DuGcnbhHGaSZQoNQR:matrix.org",
+            "!ZtSaPCawyWtxfWiIy:matrix.org"
+        ],
+        "total": 2
+    }
+
+

The server returns the list of rooms of which both the user and the server +are members. If the user is local, all rooms of which the user is a +member are returned.

+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • user_id - fully qualified: for example, @user:server.com.
  • +
+

Response

+

The following fields are returned in the JSON response body:

+
    +
  • joined_rooms - An array of room_id.
  • +
  • total - Number of rooms.
  • +
+

List media of a user

+

Gets a list of all local media that a specific user_id has created. +By default, the response is ordered by descending creation date and ascending media ID. +The newest media is on top. You can change the order with parameters +order_by and dir.

+

The API is:

+
GET /_synapse/admin/v1/users/<user_id>/media
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

A response body like the following is returned:

+
{
+  "media": [
+    {
+      "created_ts": 100400,
+      "last_access_ts": null,
+      "media_id": "qXhyRzulkwLsNHTbpHreuEgo",
+      "media_length": 67,
+      "media_type": "image/png",
+      "quarantined_by": null,
+      "safe_from_quarantine": false,
+      "upload_name": "test1.png"
+    },
+    {
+      "created_ts": 200400,
+      "last_access_ts": null,
+      "media_id": "FHfiSnzoINDatrXHQIXBtahw",
+      "media_length": 67,
+      "media_type": "image/png",
+      "quarantined_by": null,
+      "safe_from_quarantine": false,
+      "upload_name": "test2.png"
+    }
+  ],
+  "next_token": 3,
+  "total": 2
+}
+
+

To paginate, check for next_token and if present, call the endpoint again +with from set to the value of next_token. This will return a new page.

+

If the endpoint does not return a next_token then there are no more +media to paginate through.

+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • +

    user_id - string - fully qualified: for example, @user:server.com.

    +
  • +
  • +

    limit: string representing a positive integer - Is optional but is used for pagination, +denoting the maximum number of items to return in this call. Defaults to 100.

    +
  • +
  • +

    from: string representing a positive integer - Is optional but used for pagination, +denoting the offset in the returned results. This should be treated as an opaque value and +not explicitly set to anything other than the return value of next_token from a previous call. +Defaults to 0.

    +
  • +
  • +

    order_by - The method by which to sort the returned list of media. +If the ordered field has duplicates, the second order is always by ascending media_id, +which guarantees a stable ordering. Valid values are:

    +
      +
    • media_id - Media are ordered alphabetically by media_id.
    • +
    • upload_name - Media are ordered alphabetically by name the media was uploaded with.
    • +
    • created_ts - Media are ordered by when the content was uploaded in ms. +Smallest to largest. This is the default.
    • +
    • last_access_ts - Media are ordered by when the content was last accessed in ms. +Smallest to largest.
    • +
    • media_length - Media are ordered by length of the media in bytes. +Smallest to largest.
    • +
    • media_type - Media are ordered alphabetically by MIME-type.
    • +
    • quarantined_by - Media are ordered alphabetically by the user ID that +initiated the quarantine request for this media.
    • +
    • safe_from_quarantine - Media are ordered by whether the media is marked as safe +from quarantine.
    • +
    +
  • +
  • +

    dir - Direction of media order. Either f for forwards or b for backwards. +Setting this value to b will reverse the above sort order. Defaults to f.

    +
  • +
+

If neither order_by nor dir is set, the default order is newest media on top +(corresponds to order_by = created_ts and dir = b).

+

Caution. The database only has indexes on the columns media_id, +user_id and created_ts. This means that if a different sort order is used +(upload_name, last_access_ts, media_length, media_type, +quarantined_by or safe_from_quarantine), this can cause a large load on the +database, especially for large environments.

+
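The default ordering described above (newest first, with ascending media ID as the stable tiebreaker) is easy to mirror client-side when merging pages. A sketch over two records from the example response:

```python
# Two media records from the example response above.
media = [
    {"media_id": "qXhyRzulkwLsNHTbpHreuEgo", "created_ts": 100400},
    {"media_id": "FHfiSnzoINDatrXHQIXBtahw", "created_ts": 200400},
]

# Default API order: descending created_ts (order_by=created_ts, dir=b),
# breaking ties by ascending media_id for a stable ordering.
newest_first = sorted(media, key=lambda m: (-m["created_ts"], m["media_id"]))
```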

Response

+

The following fields are returned in the JSON response body:

+
    +
  • +

    media - An array of objects, each containing information about a media. +Media objects contain the following fields:

    +
      +
    • +

      created_ts - integer - Timestamp when the content was uploaded in ms.

      +
    • +
    • +

      last_access_ts - integer - Timestamp when the content was last accessed in ms.

      +
    • +
    • +

      media_id - string - The id used to refer to the media.

      +
    • +
    • +

      media_length - integer - Length of the media in bytes.

      +
    • +
    • +

      media_type - string - The MIME-type of the media.

      +
    • +
    • +

      quarantined_by - string - The user ID that initiated the quarantine request +for this media.

      +
    • +
    • +

      safe_from_quarantine - bool - Whether this media is marked as safe from quarantine.

      +
    • +
    • +

      upload_name - string - The name the media was uploaded with.

      +
    • +
    +
  • +
  • +

    next_token: integer - Indication for pagination. See above.

    +
  • +
  • +

    total - integer - Total number of media.

    +
  • +
+

Login as a user

+

Get an access token that can be used to authenticate as that user. Useful +when an admin wishes to take actions on behalf of a user.

+

The API is:

+
POST /_synapse/admin/v1/users/<user_id>/login
+{}
+
+

An optional valid_until_ms field can be specified in the request body as an +integer timestamp that specifies when the token should expire. By default tokens +do not expire.

+

A response body like the following is returned:

+
{
+    "access_token": "<opaque_access_token_string>"
+}
+
+

This API does not generate a new device for the user, so the token will not appear +in their /devices list, and in general the target user should not be able to +tell that they have been logged in as.

+

To expire the token call the standard /logout API with the token.

+

Note: The token will expire if the admin user calls /logout/all from any +of their devices, but the token will not expire if the target user does the +same.

+
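Because valid_until_ms is an absolute timestamp in milliseconds, it has to be computed from the current time plus the desired lifetime. A sketch requesting a token valid for one hour (the one-hour lifetime is an arbitrary example):

```python
import time

# valid_until_ms is an absolute Unix timestamp in milliseconds;
# omit the field entirely for a non-expiring token.
ONE_HOUR_MS = 60 * 60 * 1000
body = {"valid_until_ms": int(time.time() * 1000) + ONE_HOUR_MS}
```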

User devices

+

List all devices

+

Gets information about all devices for a specific user_id.

+

The API is:

+
GET /_synapse/admin/v2/users/<user_id>/devices
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

A response body like the following is returned:

+
{
+  "devices": [
+    {
+      "device_id": "QBUAZIFURK",
+      "display_name": "android",
+      "last_seen_ip": "1.2.3.4",
+      "last_seen_ts": 1474491775024,
+      "user_id": "<user_id>"
+    },
+    {
+      "device_id": "AUIECTSRND",
+      "display_name": "ios",
+      "last_seen_ip": "1.2.3.5",
+      "last_seen_ts": 1474491775025,
+      "user_id": "<user_id>"
+    }
+  ],
+  "total": 2
+}
+
+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • user_id - fully qualified: for example, @user:server.com.
  • +
+

Response

+

The following fields are returned in the JSON response body:

+
    +
  • +

    devices - An array of objects, each containing information about a device. +Device objects contain the following fields:

    +
      +
    • device_id - Identifier of device.
    • +
    • display_name - Display name set by the user for this device. +Absent if no name has been set.
    • +
    • last_seen_ip - The IP address where this device was last seen. +(May be a few minutes out of date, for efficiency reasons).
    • +
    • last_seen_ts - The timestamp (in milliseconds since the unix epoch) when this +device was last seen. (May be a few minutes out of date, for efficiency reasons).
    • +
    • user_id - Owner of device.
    • +
    +
  • +
  • +

    total - Total number of user's devices.

    +
  • +
+

Delete multiple devices

+

Deletes the given devices for a specific user_id, and invalidates +any access token associated with them.

+

The API is:

+
POST /_synapse/admin/v2/users/<user_id>/delete_devices
+
+{
+  "devices": [
+    "QBUAZIFURK",
+    "AUIECTSRND"
+  ]
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

An empty JSON dict is returned.

+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • user_id - fully qualified: for example, @user:server.com.
  • +
+

The following fields are required in the JSON request body:

+
    +
  • devices - The list of device IDs to delete.
  • +
+

Show a device

+

Gets information on a single device, by device_id for a specific user_id.

+

The API is:

+
GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

A response body like the following is returned:

+
{
+  "device_id": "<device_id>",
+  "display_name": "android",
+  "last_seen_ip": "1.2.3.4",
+  "last_seen_ts": 1474491775024,
+  "user_id": "<user_id>"
+}
+
+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • user_id - fully qualified: for example, @user:server.com.
  • +
  • device_id - The device to retrieve.
  • +
+

Response

+

The following fields are returned in the JSON response body:

+
    +
  • device_id - Identifier of device.
  • +
  • display_name - Display name set by the user for this device. +Absent if no name has been set.
  • +
  • last_seen_ip - The IP address where this device was last seen. +(May be a few minutes out of date, for efficiency reasons).
  • +
  • last_seen_ts - The timestamp (in milliseconds since the unix epoch) when this +device was last seen. (May be a few minutes out of date, for efficiency reasons).
  • +
  • user_id - Owner of device.
  • +
+

Update a device

+

Updates the metadata on the given device_id for a specific user_id.

+

The API is:

+
PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+{
+  "display_name": "My other phone"
+}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

An empty JSON dict is returned.

+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • user_id - fully qualified: for example, @user:server.com.
  • +
  • device_id - The device to update.
  • +
+

The following fields are required in the JSON request body:

+
    +
  • display_name - The new display name for this device. If not given, +the display name is unchanged.
  • +
+

Delete a device

+

Deletes the given device_id for a specific user_id, +and invalidates any access token associated with it.

+

The API is:

+
DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+{}
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

An empty JSON dict is returned.

+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • user_id - fully qualified: for example, @user:server.com.
  • +
  • device_id - The device to delete.
  • +
+

List all pushers

+

Gets information about all pushers for a specific user_id.

+

The API is:

+
GET /_synapse/admin/v1/users/<user_id>/pushers
+
+

To use it, you will need to authenticate by providing an access_token for a +server admin: Admin API

+

A response body like the following is returned:

+
{
+  "pushers": [
+    {
+      "app_display_name":"HTTP Push Notifications",
+      "app_id":"m.http",
+      "data": {
+        "url":"example.com"
+      },
+      "device_display_name":"pushy push",
+      "kind":"http",
+      "lang":"None",
+      "profile_tag":"",
+      "pushkey":"a@example.com"
+    }
+  ],
+  "total": 1
+}
+
+

Parameters

+

The following parameters should be set in the URL:

+
    +
  • user_id - fully qualified: for example, @user:server.com.
  • +
+

Response

+

The following fields are returned in the JSON response body:

+
  • pushers - An array containing the current pushers for the user
      • app_display_name - string - A string that will allow the user to identify what application owns this pusher.
      • app_id - string - This is a reverse-DNS style identifier for the application. Max length, 64 chars.
      • data - A dictionary of information for the pusher implementation itself.
          • url - string - Required if kind is http. The URL to use to send notifications to.
          • format - string - The format to use when sending notifications to the Push Gateway.
      • device_display_name - string - A string that will allow the user to identify what device owns this pusher.
      • profile_tag - string - This string determines which set of device specific rules this pusher executes.
      • kind - string - The kind of pusher. "http" is a pusher that sends HTTP pokes.
      • lang - string - The preferred language for receiving notifications (e.g. 'en' or 'en-US')
      • pushkey - string - This is a unique identifier for this pusher. Max length, 512 bytes.
  • total - integer - Number of pushers.

See also the Client-Server API Spec on pushers.

+
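As an illustrative sketch (not part of the official documentation), the request can be built in Python with only the standard library; the homeserver URL and access token below are placeholders:

```python
from urllib.parse import quote
from urllib.request import Request

HOMESERVER = "https://matrix.example.com"   # placeholder
ACCESS_TOKEN = "<admin_access_token>"       # placeholder

def list_pushers_request(user_id: str) -> Request:
    """Build the GET request for a user's pushers.

    The MXID contains '@' and ':' characters, so it must be
    percent-encoded when placed in the URL path.
    """
    path = f"/_synapse/admin/v1/users/{quote(user_id, safe='')}/pushers"
    return Request(HOMESERVER + path,
                   headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})

req = list_pushers_request("@user:server.com")
print(req.full_url)
```

The request object can then be passed to `urllib.request.urlopen` (or the equivalent in your HTTP client of choice) to retrieve the JSON body shown above.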

Shadow-banning users

+

Shadow-banning is a useful tool for moderating malicious or egregiously abusive users. A shadow-banned user receives successful responses to their client-server API requests, but the events are not propagated into rooms. This can be an effective tool as it (hopefully) takes longer for the user to realise they are being moderated before they pivot to another account.

+

Shadow-banning a user should be used as a tool of last resort and may lead to confusing or broken behaviour for the client. A shadow-banned user will not receive any notification, and it is generally more appropriate to ban or kick abusive users. A shadow-banned user will be unable to contact anyone on the server.

+

The API is:

+
POST /_synapse/admin/v1/users/<user_id>/shadow_ban
+
+

To use it, you will need to authenticate by providing an access_token for a server admin: Admin API

+

An empty JSON dict is returned.

+

Parameters

+

The following parameters should be set in the URL:

+
  • user_id - The fully qualified MXID: for example, @user:server.com. The user must be local.
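A sketch in Python (placeholders as before); note that the endpoint takes no request body and returns an empty JSON dict:

```python
from urllib.parse import quote
from urllib.request import Request

HOMESERVER = "https://matrix.example.com"   # placeholder
ACCESS_TOKEN = "<admin_access_token>"       # placeholder

def shadow_ban_request(user_id: str) -> Request:
    """Build the POST that shadow-bans a local user (no body needed)."""
    path = f"/_synapse/admin/v1/users/{quote(user_id, safe='')}/shadow_ban"
    return Request(HOMESERVER + path, data=b"", method="POST",
                   headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
```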

Override ratelimiting for users

+

This API allows you to override or disable ratelimiting for a specific user. There are separate APIs to set, get and delete the ratelimit.

+

Get status of ratelimit

+

The API is:

+
GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
+
+

To use it, you will need to authenticate by providing an access_token for a server admin: Admin API

+

A response body like the following is returned:

+
{
  "messages_per_second": 0,
  "burst_count": 0
}
+
+

Parameters

+

The following parameters should be set in the URL:

+
  • user_id - The fully qualified MXID: for example, @user:server.com. The user must be local.

Response

+

The following fields are returned in the JSON response body:

+
  • messages_per_second - integer - The number of actions that can be performed in a second. 0 means that ratelimiting is disabled for this user.
  • burst_count - integer - How many actions can be performed before being limited.

If no custom ratelimit is set, an empty JSON dict is returned.

+
{}
+
+

Set ratelimit

+

The API is:

+
POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
+
+

To use it, you will need to authenticate by providing an access_token for a server admin: Admin API

+

A response body like the following is returned:

+
{
  "messages_per_second": 0,
  "burst_count": 0
}
+
+

Parameters

+

The following parameters should be set in the URL:

+
  • user_id - The fully qualified MXID: for example, @user:server.com. The user must be local.

Body parameters:

+
  • messages_per_second - positive integer, optional. The number of actions that can be performed in a second. Defaults to 0.
  • burst_count - positive integer, optional. How many actions can be performed before being limited. Defaults to 0.

To disable ratelimiting for a user, set both values to 0.

+

Response

+

The following fields are returned in the JSON response body:

+
  • messages_per_second - integer - The number of actions that can be performed in a second.
  • burst_count - integer - How many actions can be performed before being limited.
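A hedged sketch of building this POST in Python (the homeserver URL and token are placeholders; the endpoint and body fields are as documented above):

```python
import json
from urllib.parse import quote
from urllib.request import Request

HOMESERVER = "https://matrix.example.com"   # placeholder
ACCESS_TOKEN = "<admin_access_token>"       # placeholder

def set_ratelimit_request(user_id: str,
                          messages_per_second: int = 0,
                          burst_count: int = 0) -> Request:
    """Build the POST that overrides a user's ratelimit.

    With both values left at 0, this disables ratelimiting for the user.
    """
    body = json.dumps({"messages_per_second": messages_per_second,
                       "burst_count": burst_count}).encode("utf-8")
    path = f"/_synapse/admin/v1/users/{quote(user_id, safe='')}/override_ratelimit"
    return Request(HOMESERVER + path, data=body, method="POST",
                   headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
                            "Content-Type": "application/json"})
```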

Delete ratelimit

+

The API is:

+
DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
+
+

To use it, you will need to authenticate by providing an access_token for a server admin: Admin API

+

An empty JSON dict is returned.

+
{}
+
+

Parameters

+

The following parameters should be set in the URL:

+
  • user_id - The fully qualified MXID: for example, @user:server.com. The user must be local.

Version API

+

This API returns the running Synapse version and the Python version on which Synapse is being run. This is useful when a Synapse instance is behind a proxy that does not forward the 'Server' header (which also contains Synapse version information).

+

The API is:

+
GET /_synapse/admin/v1/server_version
+
+

It returns a JSON body like the following:

+
{
    "server_version": "0.99.2rc1 (b=develop, abcdef123)",
    "python_version": "3.6.8"
}
+
+
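For example, the response can be parsed like this (a minimal sketch; the JSON string is the illustrative body from above):

```python
import json

# Illustrative response body, as shown above.
raw = '{"server_version": "0.99.2rc1 (b=develop, abcdef123)", "python_version": "3.6.8"}'

info = json.loads(raw)

# The server_version field also carries branch/commit details in
# parentheses; split them off to get the bare version number.
bare_version = info["server_version"].split(" (")[0]
print(bare_version)            # prints: 0.99.2rc1
print(info["python_version"])  # prints: 3.6.8
```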

Using the synapse manhole

+

The "manhole" allows server administrators to access a Python shell on a running Synapse installation. This is a very powerful mechanism for administration and debugging.

+

Security Warning

+

Note that this will give administrative access to synapse to all users with shell access to the server. It should therefore not be enabled in environments where untrusted users have shell access.

+
+

To enable it, first uncomment the manhole listener configuration in homeserver.yaml. The configuration is slightly different if you're using Docker.

+

Docker config

+

If you are using Docker, set bind_addresses to ['0.0.0.0'] as shown:

+
listeners:
  - port: 9000
    bind_addresses: ['0.0.0.0']
    type: manhole
+
+

When using docker run to start the server, you will then need to change the command to the following to include the manhole port forwarding. The -p 127.0.0.1:9000:9000 below is important: it ensures that access to the manhole is only possible for local users.

+
docker run -d --name synapse \
    --mount type=volume,src=synapse-data,dst=/data \
    -p 8008:8008 \
    -p 127.0.0.1:9000:9000 \
    matrixdotorg/synapse:latest
+
+

Native config

+

If you are not using Docker, set bind_addresses to ['::1', '127.0.0.1'] as shown. The bind_addresses in the example below are important: they ensure that access to the manhole is only possible for local users.

+
listeners:
  - port: 9000
    bind_addresses: ['::1', '127.0.0.1']
    type: manhole
+
+

Accessing synapse manhole

+

Then restart synapse, and point an ssh client at port 9000 on localhost, using the username matrix:

+
ssh -p9000 matrix@localhost
+
+

The password is rabbithole.

+

This gives a Python REPL in which hs gives access to the synapse.server.HomeServer object - which in turn gives access to many other parts of the process.

+

Note that any call which returns a coroutine will need to be wrapped in ensureDeferred.

+

As a simple example, retrieving an event from the database:

+
>>> from twisted.internet import defer
>>> defer.ensureDeferred(hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org'))
<Deferred at 0x7ff253fc6998 current result: <FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''>>
+
+

How to monitor Synapse metrics using Prometheus

+
  1. Install Prometheus:

     Follow instructions at http://prometheus.io/docs/introduction/install/

  2. Enable Synapse metrics:

     There are two methods of enabling metrics in Synapse.

     The first serves the metrics as a part of the usual web server and can be enabled by adding the "metrics" resource to the existing listener as such:

       resources:
         - names:
           - client
           - metrics

     This provides a simple way of adding metrics to your Synapse installation, and serves under /_synapse/metrics. If you do not wish your metrics to be publicly exposed, you will need to either filter them out at your load balancer, or use the second method.

     The second method runs the metrics server on a different port, in a different thread to Synapse. This can make it more resilient when heavy load might otherwise prevent metrics from being retrieved, and makes it easier to expose the metrics to internal networks only. The served metrics are available over HTTP only, at /_synapse/metrics.

     Add a new listener to homeserver.yaml:

       listeners:
         - type: metrics
           port: 9000
           bind_addresses:
             - '0.0.0.0'

     For both options, you will need to ensure that enable_metrics is set to True.

  3. Restart Synapse.

  4. Add a Prometheus target for Synapse.

     It needs to set the metrics_path to a non-default value (under scrape_configs):

       - job_name: "synapse"
         scrape_interval: 15s
         metrics_path: "/_synapse/metrics"
         static_configs:
           - targets: ["my.server.here:port"]

     where my.server.here is the IP address of Synapse, and port is the listener port configured with the metrics resource.

     If your Prometheus is older than 1.5.2, you will need to replace static_configs in the above with target_groups.

  5. Restart Prometheus.

  6. Consider using the Grafana dashboard and required recording rules.

Monitoring workers

+

To monitor a Synapse installation using workers, every worker needs to be monitored independently, in addition to the main homeserver process. This is because workers don't send their metrics to the main homeserver process, but expose them directly (if they are configured to do so).

+

To allow collecting metrics from a worker, you need to add a metrics listener to its configuration, by adding the following under worker_listeners:

+
  - type: metrics
    bind_address: ''
    port: 9101
+
+

The bind_address and port parameters should be set so that the resulting listener can be reached by Prometheus, and so that they don't clash with an existing worker. With this example, the worker's metrics would then be available on http://127.0.0.1:9101.

+

Example Prometheus target for Synapse with workers:

+
  - job_name: "synapse"
    scrape_interval: 15s
    metrics_path: "/_synapse/metrics"
    static_configs:
      - targets: ["my.server.here:port"]
        labels:
          instance: "my.server"
          job: "master"
          index: 1
      - targets: ["my.workerserver.here:port"]
        labels:
          instance: "my.server"
          job: "generic_worker"
          index: 1
      - targets: ["my.workerserver.here:port"]
        labels:
          instance: "my.server"
          job: "generic_worker"
          index: 2
      - targets: ["my.workerserver.here:port"]
        labels:
          instance: "my.server"
          job: "media_repository"
          index: 1
+

Labels (instance, job, index) can be defined as anything. The labels are used to group graphs in Grafana.

+

Renaming of metrics & deprecation of old names in 1.2

+

Synapse 1.2 updates the Prometheus metrics to match the naming convention of the upstream prometheus_client. The old names are considered deprecated and will be removed in a future version of Synapse.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
New NameOld Name
python_gc_objects_collected_totalpython_gc_objects_collected
python_gc_objects_uncollectable_totalpython_gc_objects_uncollectable
python_gc_collections_totalpython_gc_collections
process_cpu_seconds_totalprocess_cpu_seconds
synapse_federation_client_sent_transactions_totalsynapse_federation_client_sent_transactions
synapse_federation_client_events_processed_totalsynapse_federation_client_events_processed
synapse_event_processing_loop_count_totalsynapse_event_processing_loop_count
synapse_event_processing_loop_room_count_totalsynapse_event_processing_loop_room_count
synapse_util_metrics_block_count_totalsynapse_util_metrics_block_count
synapse_util_metrics_block_time_seconds_totalsynapse_util_metrics_block_time_seconds
synapse_util_metrics_block_ru_utime_seconds_totalsynapse_util_metrics_block_ru_utime_seconds
synapse_util_metrics_block_ru_stime_seconds_totalsynapse_util_metrics_block_ru_stime_seconds
synapse_util_metrics_block_db_txn_count_totalsynapse_util_metrics_block_db_txn_count
synapse_util_metrics_block_db_txn_duration_seconds_totalsynapse_util_metrics_block_db_txn_duration_seconds
synapse_util_metrics_block_db_sched_duration_seconds_totalsynapse_util_metrics_block_db_sched_duration_seconds
synapse_background_process_start_count_totalsynapse_background_process_start_count
synapse_background_process_ru_utime_seconds_totalsynapse_background_process_ru_utime_seconds
synapse_background_process_ru_stime_seconds_totalsynapse_background_process_ru_stime_seconds
synapse_background_process_db_txn_count_totalsynapse_background_process_db_txn_count
synapse_background_process_db_txn_duration_seconds_totalsynapse_background_process_db_txn_duration_seconds
synapse_background_process_db_sched_duration_seconds_totalsynapse_background_process_db_sched_duration_seconds
synapse_storage_events_persisted_events_totalsynapse_storage_events_persisted_events
synapse_storage_events_persisted_events_sep_totalsynapse_storage_events_persisted_events_sep
synapse_storage_events_state_delta_totalsynapse_storage_events_state_delta
synapse_storage_events_state_delta_single_event_totalsynapse_storage_events_state_delta_single_event
synapse_storage_events_state_delta_reuse_delta_totalsynapse_storage_events_state_delta_reuse_delta
synapse_federation_server_received_pdus_totalsynapse_federation_server_received_pdus
synapse_federation_server_received_edus_totalsynapse_federation_server_received_edus
synapse_handler_presence_notified_presence_totalsynapse_handler_presence_notified_presence
synapse_handler_presence_federation_presence_out_totalsynapse_handler_presence_federation_presence_out
synapse_handler_presence_presence_updates_totalsynapse_handler_presence_presence_updates
synapse_handler_presence_timers_fired_totalsynapse_handler_presence_timers_fired
synapse_handler_presence_federation_presence_totalsynapse_handler_presence_federation_presence
synapse_handler_presence_bump_active_time_totalsynapse_handler_presence_bump_active_time
synapse_federation_client_sent_edus_totalsynapse_federation_client_sent_edus
synapse_federation_client_sent_pdu_destinations_count_totalsynapse_federation_client_sent_pdu_destinations:count
synapse_federation_client_sent_pdu_destinations_totalsynapse_federation_client_sent_pdu_destinations:total
synapse_handlers_appservice_events_processed_totalsynapse_handlers_appservice_events_processed
synapse_notifier_notified_events_totalsynapse_notifier_notified_events
synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_totalsynapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter
synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_totalsynapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter
synapse_http_httppusher_http_pushes_processed_totalsynapse_http_httppusher_http_pushes_processed
synapse_http_httppusher_http_pushes_failed_totalsynapse_http_httppusher_http_pushes_failed
synapse_http_httppusher_badge_updates_processed_totalsynapse_http_httppusher_badge_updates_processed
synapse_http_httppusher_badge_updates_failed_totalsynapse_http_httppusher_badge_updates_failed
+
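When migrating dashboards or alert rules, a small lookup table built from a few rows of the table above can help rewrite queries (a sketch only; extend the dict with whichever rows you need):

```python
# A few old -> new pairs taken from the renaming table above.
RENAMED_METRICS = {
    "python_gc_objects_collected": "python_gc_objects_collected_total",
    "process_cpu_seconds": "process_cpu_seconds_total",
    "synapse_federation_client_sent_transactions":
        "synapse_federation_client_sent_transactions_total",
    "synapse_storage_events_persisted_events":
        "synapse_storage_events_persisted_events_total",
}

def current_name(metric: str) -> str:
    """Return the post-1.2 name for a metric, or the metric itself
    if it was not renamed."""
    return RENAMED_METRICS.get(metric, metric)
```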

Removal of deprecated metrics & time based counters becoming histograms in 0.31.0

+

The duplicated metrics deprecated in Synapse 0.27.0 have been removed.

+

All time duration-based metrics have been changed to be seconds. This affects:

+ + + + + + +
msec -> sec metrics
python_gc_time
python_twisted_reactor_tick_time
synapse_storage_query_time
synapse_storage_schedule_time
synapse_storage_transaction_time
+

Several metrics have been changed to be histograms, which sort entries into buckets and allow better analysis. The following metrics are now histograms:

+ + + + + + + + +
Altered metrics
python_gc_time
python_twisted_reactor_pending_calls
python_twisted_reactor_tick_time
synapse_http_server_response_time_seconds
synapse_storage_query_time
synapse_storage_schedule_time
synapse_storage_transaction_time
+

Block and response metrics renamed for 0.27.0

+

Synapse 0.27.0 begins the process of rationalising the duplicate *:count metrics reported for the resource tracking for code blocks and HTTP requests.

+

At the same time, the corresponding *:total metrics are being renamed, as the :total suffix no longer makes sense in the absence of a corresponding :count metric.

+

To enable a graceful migration path, this release just adds new names for the metrics being renamed. A future release will remove the old ones.

+

The following table shows the new metrics, and the old metrics which they are replacing.

+ + + + + + + + + + + + + + + + + + + + + + +
New nameOld name
synapse_util_metrics_block_countsynapse_util_metrics_block_timer:count
synapse_util_metrics_block_countsynapse_util_metrics_block_ru_utime:count
synapse_util_metrics_block_countsynapse_util_metrics_block_ru_stime:count
synapse_util_metrics_block_countsynapse_util_metrics_block_db_txn_count:count
synapse_util_metrics_block_countsynapse_util_metrics_block_db_txn_duration:count
synapse_util_metrics_block_time_secondssynapse_util_metrics_block_timer:total
synapse_util_metrics_block_ru_utime_secondssynapse_util_metrics_block_ru_utime:total
synapse_util_metrics_block_ru_stime_secondssynapse_util_metrics_block_ru_stime:total
synapse_util_metrics_block_db_txn_countsynapse_util_metrics_block_db_txn_count:total
synapse_util_metrics_block_db_txn_duration_secondssynapse_util_metrics_block_db_txn_duration:total
synapse_http_server_response_countsynapse_http_server_requests
synapse_http_server_response_countsynapse_http_server_response_time:count
synapse_http_server_response_countsynapse_http_server_response_ru_utime:count
synapse_http_server_response_countsynapse_http_server_response_ru_stime:count
synapse_http_server_response_countsynapse_http_server_response_db_txn_count:count
synapse_http_server_response_countsynapse_http_server_response_db_txn_duration:count
synapse_http_server_response_time_secondssynapse_http_server_response_time:total
synapse_http_server_response_ru_utime_secondssynapse_http_server_response_ru_utime:total
synapse_http_server_response_ru_stime_secondssynapse_http_server_response_ru_stime:total
synapse_http_server_response_db_txn_countsynapse_http_server_response_db_txn_count:total
synapse_http_server_response_db_txn_duration_secondssynapse_http_server_response_db_txn_duration:total
+

Standard Metric Names

+

As of synapse version 0.18.2, the format of the process-wide metrics has been changed to fit prometheus standard naming conventions. Additionally the units have been changed to seconds, from milliseconds.

+ + + + +
New nameOld name
process_cpu_user_seconds_totalprocess_resource_utime / 1000
process_cpu_system_seconds_totalprocess_resource_stime / 1000
process_open_fds (no 'type' label)process_fds
+

The python-specific counts of garbage collector performance have been renamed.

+ + + + +
New nameOld name
python_gc_timereactor_gc_time
python_gc_unreachable_totalreactor_gc_unreachable
python_gc_countsreactor_gc_counts
+

The twisted-specific reactor metrics have been renamed.

+ + + +
New nameOld name
python_twisted_reactor_pending_callsreactor_pending_calls
python_twisted_reactor_tick_timereactor_tick_time
+
+

Contributing

+

Welcome to Synapse

+

This document aims to get you started with contributing to this repo!

+ +

1. Who can contribute to Synapse?

+

Everyone is welcome to contribute code to matrix.org projects, provided that they are willing to license their contributions under the same license as the project itself. We follow a simple 'inbound=outbound' model for contributions: the act of submitting an 'inbound' contribution means that the contributor agrees to license the code under the same terms as the project's overall 'outbound' license - in our case, this is almost always Apache Software License v2 (see LICENSE).

+

2. What do I need?

+

The code of Synapse is written in Python 3. To do pretty much anything, you'll need a recent version of Python 3.

+

The source code of Synapse is hosted on GitHub. You will also need a recent version of git.

+

For some tests, you will need a recent version of Docker.

+

3. Get the source.

+

The preferred and easiest way to contribute changes is to fork the relevant project on GitHub, and then create a pull request to ask us to pull your changes into our repo.

+

Please base your changes on the develop branch.

+
git clone git@github.com:YOUR_GITHUB_USER_NAME/synapse.git
cd synapse
git checkout develop
+

If you need help getting started with git, this is beyond the scope of the document, but you can find many good git tutorials on the web.

+

4. Install the dependencies

+

Under Unix (macOS, Linux, BSD, ...)

+

Once you have installed Python 3 and added the source, please open a terminal and set up a virtualenv, as follows:

+
cd path/where/you/have/cloned/the/repository
python3 -m venv ./env
source ./env/bin/activate
pip install -e ".[all,lint,mypy,test]"
pip install tox
+
+

This will install the developer dependencies for the project.

+

Under Windows

+

TBD

+

5. Get in touch.

+

Join our developer community on Matrix: #synapse-dev:matrix.org !

+

6. Pick an issue.

+

Fix your favorite problem or perhaps find a Good First Issue to work on.

+

7. Turn coffee and documentation into code and documentation!

+

Synapse's code style is documented here. Please follow it, including the conventions for the sample configuration file.

+

There is a growing amount of documentation located in the docs directory. This documentation is intended primarily for sysadmins running their own Synapse instance, as well as developers interacting externally with Synapse. docs/dev exists primarily to house documentation for Synapse developers. docs/admin_api houses documentation regarding Synapse's Admin API, which is used mostly by sysadmins and external service developers.

+

If you add new files to either of these folders, please use GitHub-Flavoured Markdown.

+

Some documentation also exists in Synapse's GitHub Wiki, although this is primarily contributed to by community authors.

+

8. Test, test, test!

+

+

While you're developing and before submitting a patch, you'll want to test your code.

+

Run the linters.

+

The linters look at your code and do two things:

+
  • ensure that your code follows the coding style adopted by the project;
  • catch a number of errors in your code.

They're pretty fast, don't hesitate!

+
source ./env/bin/activate
./scripts-dev/lint.sh
+

Note that this script will modify your files to fix styling errors. Make sure that you have saved all your files.

+

If you wish to restrict the linters to only the files changed since the last commit (much faster!), you can instead run:

+
source ./env/bin/activate
./scripts-dev/lint.sh -d
+

Or if you know exactly which files you wish to lint, you can instead run:

+
source ./env/bin/activate
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
+

Run the unit tests.

+

The unit tests run parts of Synapse, including your changes, to see if anything was broken. They are slower than the linters but will typically catch more errors.

+
source ./env/bin/activate
trial tests
+

If you wish to only run some unit tests, you may specify another module instead of tests - or a test class or a method:

+
source ./env/bin/activate
trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
+

If your tests fail, you may wish to look at the logs:

+
less _trial_temp/test.log
+
+

Run the integration tests.

+

The integration tests are a more comprehensive suite of tests. They run a full version of Synapse, including your changes, to check if anything was broken. They are slower than the unit tests but will typically catch more errors.

+

The following command will let you run the integration test with the most common configuration:

+
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:py37
+
+

This configuration should generally cover your needs. For more details about other configurations, see documentation in the SyTest repo.

+

9. Submit your patch.

+

Once you're happy with your patch, it's time to prepare a Pull Request.

+

To prepare a Pull Request, please:

+
  1. verify that all the tests pass, including the coding style;
  2. sign off your contribution;
  3. git push your commit to your fork of Synapse;
  4. on GitHub, create the Pull Request;
  5. add a changelog entry and push it to your Pull Request;
  6. for most contributors, that's all - however, if you are a member of the organization matrix-org, on GitHub, please request a review from matrix.org / Synapse Core.

Changelog

+

All changes, even minor ones, need a corresponding changelog / newsfragment entry. These are managed by Towncrier.

+

To create a changelog entry, make a new file in the changelog.d directory named in the format of PRnumber.type. The type can be one of the following:

+
  • feature
  • bugfix
  • docker (for updates to the Docker image)
  • doc (for updates to the documentation)
  • removal (also used for deprecations)
  • misc (for internal-only changes)

This file will become part of our changelog at the next release, so the content of the file should be a short description of your change in the same style as the rest of the changelog. The file can contain Markdown formatting, and should end with a full stop (.) or an exclamation mark (!) for consistency.

+

Adding credits to the changelog is encouraged; we value your contributions and would like to have you shouted out in the release notes!

+

For example, a fix in PR #1234 would have its changelog entry in changelog.d/1234.bugfix, and contain content like:

+
+

The security levels of Florbs are now validated when received via the /federation/florb endpoint. Contributed by Jane Matrix.

+
+

If there are multiple pull requests involved in a single bugfix/feature/etc, then the content for each changelog.d file should be the same. Towncrier will merge the matching files together into a single changelog entry when we come to release.

+

How do I know what to call the changelog file before I create the PR?

+

Obviously, you don't know if you should call your newsfile 1234.bugfix or 5678.bugfix until you create the PR, which leads to a chicken-and-egg problem.

+

There are two options for solving this:

+
  1. Open the PR without a changelog file, see what number you got, and then add the changelog file to your branch (see Updating your pull request), or:

  2. Look at the list of all issues/PRs, add one to the highest number you see, and quickly open the PR before somebody else claims your number.

     This script might be helpful if you find yourself doing this a lot.

Sorry, we know it's a bit fiddly, but it's really helpful for us when we come to put together a release!

+

Debian changelog

+

Changes which affect the debian packaging files (in debian) are an exception to the rule that all changes require a changelog.d file.

+

In this case, you will need to add an entry to the debian changelog for the next release. For this, run the following command:

+
dch
+
+

This will make up a new version number (if there isn't already an unreleased version in flight), and open an editor where you can add a new changelog entry. (Our release process will ensure that the version number and maintainer name are corrected for the release.)

+

If your change affects both the debian packaging and files outside the debian directory, you will need both a regular newsfragment and an entry in the debian changelog. (Though typically such changes should be submitted as two separate pull requests.)

+

Sign off

+

In order to have a concrete record that your contribution is intentional and you agree to license it under the same terms as the project's license, we've adopted the same lightweight approach that the Linux Kernel submitting patches process, Docker, and many other projects use: the DCO (Developer Certificate of Origin: http://developercertificate.org/). This is a simple declaration that you wrote the contribution or otherwise have the right to contribute it to Matrix:

+
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

If you agree to this for your contribution, then all that's needed is to include the line in your commit or pull request comment:

+
Signed-off-by: Your Name <your@email.example.org>
+
+

We accept contributions under a legally identifiable name, such as your name on government documentation or common-law names (names claimed by legitimate usage or repute). Unfortunately, we cannot accept anonymous contributions at this time.

+

Git allows you to add this signoff automatically when using the -s flag to git commit, which uses the name and email set in your user.name and user.email git configs.

+

10. Turn feedback into better code.

+

Once the Pull Request is opened, you will see a few things:

+
  1. our automated CI (Continuous Integration) pipeline will run (again) the linters, the unit tests, the integration tests and more;
  2. one or more of the developers will take a look at your Pull Request and offer feedback.

From this point, you should:

+
  1. Look at the results of the CI pipeline.
      • If there is any error, fix the error.
  2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
  3. Create a new commit with the changes.
      • Please do NOT overwrite the history. New commits make the reviewer's life easier.
      • Push these commits to your Pull Request.
  4. Back to 1.

Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!

+

11. Find a new issue.

+

By now, you know the drill!

+

Notes for maintainers on merging PRs etc

+

There are some notes for those with commit access to the project on how we manage git here.

+

Conclusion

+

That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!

+

Code Style

+

Formatting tools

+

The Synapse codebase uses a number of code formatting tools in order to quickly and automatically check for formatting (and sometimes logical) errors in code.

+

The necessary tools are detailed below.

+

First install them with:

+
pip install -e ".[lint,mypy]"
+
+
  • black

    The Synapse codebase uses black as an opinionated code formatter, ensuring all committed code is properly formatted.

    Have black auto-format your code (it shouldn't change any functionality) with:

    black . --exclude="\.tox|build|env"

  • flake8

    flake8 is a code checking tool. We require code to pass flake8 before being merged into the codebase.

    Check all application and test code with:

    flake8 synapse tests

  • isort

    isort ensures imports are nicely formatted, and can suggest and auto-fix issues such as double-importing.

    Auto-fix imports with:

    isort -rc synapse tests

    -rc means to recursively search the given directories.

It's worth noting that modern IDEs and text editors can run these tools automatically on save. It may be worth looking into whether this functionality is supported in your editor for a more convenient development workflow. It is not, however, recommended to run flake8 on save as it takes a while and is very resource intensive.

+

General rules

+
  • Naming:
      • Use camel case for class and type names
      • Use underscores for functions and variables.
  • Docstrings: should follow the google code style. See the examples in the sphinx documentation.
  • Imports:
      • Imports should be sorted by isort as described above.
      • Prefer to import classes and functions rather than packages or modules.

        Example:

        from synapse.types import UserID
        ...
        user_id = UserID(local, server)

        is preferred over:

        from synapse import types
        ...
        user_id = types.UserID(local, server)

        (or any other variant).

        This goes against the advice in the Google style guide, but it means that errors in the name are caught early (at import time).
      • Avoid wildcard imports (from synapse.types import *) and relative imports (from .types import UserID).
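To make the docstring rule concrete, here is a Google-style docstring on a hypothetical function (the function itself is made up for illustration; only the docstring layout is the point):

```python
def get_display_name(user_id: str, allow_remote: bool = True) -> str:
    """Look up the display name for a user.

    Args:
        user_id: The Matrix ID of the user, e.g. "@alice:example.com".
        allow_remote: If True, we may ask the user's own homeserver for
            the name. (Ignored in this illustrative stub.)

    Returns:
        The user's display name, or the localpart of their ID if no
        display name is known.

    Raises:
        ValueError: If user_id is not a valid Matrix user ID.
    """
    if not user_id.startswith("@") or ":" not in user_id:
        raise ValueError("not a valid Matrix user ID: %r" % (user_id,))
    # illustrative fallback: just return the localpart of the ID
    return user_id[1:].split(":", 1)[0]
```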

Configuration file format

+

The sample configuration file acts as a reference to Synapse's configuration options for server administrators. Remember that many readers will be unfamiliar with YAML and server administration in general, so it is important that the file be as easy to understand as possible, which includes following a consistent format.

+

Some guidelines follow:

+
  • Sections should be separated with a heading consisting of a single line prefixed and suffixed with ##. There should be two blank lines before the section header, and one after.

  • Each option should be listed in the file with the following format:

      • A comment describing the setting. Each line of this comment should be prefixed with a hash (#) and a space.

        The comment should describe the default behaviour (i.e., what happens if the setting is omitted), as well as what the effect will be if the setting is changed.

        Often, the comment ends with something like "uncomment the following to ...".

      • A line consisting of only #.

      • A commented-out example setting, prefixed with only #.

        For boolean (on/off) options, convention is that this example should be the opposite to the default (so the comment will end with "Uncomment the following to enable [or disable] ..."). For other options, the example should give some non-default value which is likely to be useful to the reader.

  • There should be a blank line between each option.

  • Where several settings are grouped into a single dict, avoid the convention where the whole block is commented out, resulting in comment lines starting # #, as this is hard to read and confusing to edit. Instead, leave the top-level config option uncommented, and follow the conventions above for sub-options. Ensure that your code correctly handles the top-level option being set to None (as it will be if no sub-options are enabled).

  • Lines should be wrapped at 80 characters.

  • Use two-space indents.

  • true and false are spelt thus (as opposed to True, etc.)

  • Use single quotes (') rather than double-quotes (") or backticks (`) to refer to configuration options.

Example:

+
## Frobnication ##

# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true

# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobincator_frobber: special_frobber

# Settings for the frobber
#
frobber:
  # frobbing speed. Defaults to 1.
  #
  #speed: 10

  # frobbing distance. Defaults to 1000.
  #
  #distance: 100

Note that the sample configuration is generated from the synapse code and is maintained by a script, scripts-dev/generate_sample_config. Making sure that the output from this script matches the desired format is left as an exercise for the reader!

+

Some notes on how we use git

+

On keeping the commit history clean

+

In an ideal world, our git commit history would be a linear progression of commits each of which contains a single change building on what came before. Here, by way of an arbitrary example, is the top of git log --graph b2dba0607:

+clean git graph +

Note how the commit comment explains clearly what is changing and why. Also note the absence of merge commits, as well as the absence of commits called things like (to pick a few culprits): “pep8”, “fix broken test”, “oops”, “typo”, or “Who's the president?”.

+

There are a number of reasons why keeping a clean commit history is a good thing:

+
  • From time to time, after a change lands, it turns out to be necessary to revert it, or to backport it to a release branch. Those operations are much easier when the change is contained in a single commit.

  • Similarly, it's much easier to answer questions like “is the fix for /publicRooms on the release branch?” if that change consists of a single commit.

  • Likewise: “what has changed on this branch in the last week?” is much clearer without merges and “pep8” commits everywhere.

  • Sometimes we need to figure out where a bug got introduced, or some behaviour changed. One way of doing that is with git bisect: pick an arbitrary commit between the known good point and the known bad point, and see how the code behaves. However, that strategy fails if the commit you chose is the middle of someone's epic branch in which they broke the world before putting it back together again.

One counterargument is that it is sometimes useful to see how a PR evolved as it went through review cycles. This is true, but that information is always available via the GitHub UI (or via the little-known refs/pull namespace).

+

Of course, in reality, things are more complicated than that. We have release branches as well as develop and master, and we deliberately merge changes between them. Bugs often slip through and have to be fixed later. That's all fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim towards.

+

Merges, squashes, rebases: wtf?

+

Ok, so that's what we'd like to achieve. How do we achieve it?

+

The TL;DR is: when you come to merge a pull request, you probably want to “squash and merge”:

+

squash and merge.

+

(This applies whether you are merging your own PR, or that of another contributor.)

+

“Squash and merge”1 takes all of the changes in the PR, and bundles them into a single commit. GitHub gives you the opportunity to edit the commit message before you confirm, and normally you should do so, because the default will be useless (again: * woops typo is not a useful thing to keep in the historical record).

+

The main problem with this approach comes when you have a series of pull requests which build on top of one another: as soon as you squash-merge the first PR, you'll end up with a stack of conflicts to resolve in all of the others. In general, it's best to avoid this situation in the first place by trying not to have multiple related PRs in flight at the same time. Still, sometimes that's not possible and doing a regular merge is the lesser evil.

+

Another occasion in which a regular merge makes more sense is a PR where you've deliberately created a series of commits each of which makes sense in its own right. For example: a PR which gradually propagates a refactoring operation through the codebase, or a PR which is the culmination of several other PRs. In this case the ability to figure out when a particular change/bug was introduced could be very useful.

+

Ultimately: this is not a hard-and-fast rule. If in doubt, ask yourself “do each of the commits I am about to merge make sense in their own right?”, but remember that we're just doing our best to balance “keeping the commit history clean” with other factors.

+

Git branching model

+

A lot of words have been written in the past about git branching models (no really, a lot). I tend to think the whole thing is overblown. Fundamentally, it's not that complicated. Here's how we do it.

+

Let's start with a picture:

+

branching model

+

It looks complicated, but it's really not. There's one basic rule: anyone is free to merge from any more-stable branch to any less-stable branch at any time2. (The principle behind this is that if a change is good enough for the more-stable branch, then it's also good enough to put in a less-stable branch.)

+

Meanwhile, merging (or squashing, as per the above) from a less-stable to a more-stable branch is a deliberate action in which you want to publish a change or a set of changes to (some subset of) the world: for example, this happens when a PR is landed, or as part of our release process.

+

So, what counts as a more- or less-stable branch? A little reflection will show that our active branches are ordered thus, from more-stable to less-stable:

+
  • master (tracks our last release).
  • release-vX.Y.Z (the branch where we prepare the next release)3.
  • PR branches which are targeting the release.
  • develop (our "mainline" branch containing our bleeding-edge).
  • regular PR branches.

The corollary is: if you have a bugfix that needs to land in both release-vX.Y.Z and develop, then you should base your PR on release-vX.Y.Z, get it merged there, and then merge from release-vX.Y.Z to develop. (If a fix lands in develop and we later need it in a release branch, we can of course cherry-pick it, but landing it in the release branch first helps reduce the chance of annoying conflicts.)
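As a sketch of that flow (in a throwaway repository, with made-up branch and file names):

```shell
# throwaway demo of landing a fix on the release branch, then merging
# it into develop; the branch and file names are made up for illustration
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "A Dev"
git config user.email "dev@example.org"
echo "base" > code.txt
git add code.txt
git commit -q -m "Initial commit"
git branch -m develop                 # call the mainline branch "develop"
git branch release-v1.2.3             # the release branch

# base the bugfix on the release branch and land it there first...
git checkout -q release-v1.2.3
echo "fix" >> code.txt
git commit -q -am "Fix a bug"

# ...then merge the release branch into develop
git checkout -q develop
git merge -q --no-edit release-v1.2.3
grep "fix" code.txt                   # the fix is now on develop too
```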

+
+

[1]: “Squash and merge” is GitHub's term for this operation. Given that there is no merge involved, I'm not convinced it's the most intuitive name. ^

+

[2]: Well, anyone with commit access.^

+

[3]: Very, very occasionally (I think this has happened once in the history of Synapse), we've had two releases in flight at once. Obviously, release-v1.2.3 is more-stable than release-v1.3.0. ^

+

OpenTracing

+

Background

+

OpenTracing is a semi-standard being adopted by a number of distributed tracing platforms. It is a common API for facilitating vendor-agnostic tracing instrumentation. That is, we can use the OpenTracing API and select one of a number of tracer implementations to do the heavy lifting in the background. Our currently selected implementation is Jaeger.

+

OpenTracing is a tool which gives an insight into the causal relationship of work done in and between servers. The servers each track events and report them to a centralised server - in Synapse's case: Jaeger. The basic unit used to represent events is the span. The span roughly represents a single piece of work that was done and the time at which it occurred. A span can have child spans, meaning that the work of the child had to be completed for the parent span to complete, or it can have follow-on spans which represent work that is undertaken as a result of the parent but is not depended on by the parent in order to finish.
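As a toy model of the parent/child relationship (this is not Jaeger's or opentracing's actual API, just an illustration of the concept):

```python
import time

class Span:
    """A toy span: a named unit of work with a start/end time and a parent.

    This is only an illustration of the concept; real tracers (such as
    the opentracing/Jaeger clients) have a much richer API.
    """

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.start = time.monotonic()
        self.end = None

    def child(self, name):
        # work that must finish before this span can complete
        return Span(name, parent=self)

    def finish(self):
        self.end = time.monotonic()

# a request span, with a child span for a database query done as part of it
request = Span("handle_request")
db_query = request.child("db_query")
db_query.finish()
request.finish()
```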

+

Since this is undertaken in a distributed environment, a request to another server, such as an RPC or a simple GET, can be considered a span (a unit of work) for the local server. This causal link is what OpenTracing aims to capture and visualise. In order to do this, metadata about the local server's span, i.e. the 'span context', needs to be included with the request to the remote.

+

It is up to the remote server to decide what it does with the spans it creates. This is called the sampling policy and it can be configured through Jaeger's settings.

+

For OpenTracing concepts see https://opentracing.io/docs/overview/what-is-tracing/.

+

For more information about Jaeger's implementation see https://www.jaegertracing.io/docs/.

+

Setting up OpenTracing

+

To receive OpenTracing spans, start up a Jaeger server. This can be done using docker like so:

+
docker run -d --name jaeger \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14268:14268 \
  jaegertracing/all-in-one:1

Latest documentation is probably at https://www.jaegertracing.io/docs/latest/getting-started.

+

Enable OpenTracing in Synapse

+

OpenTracing is not enabled by default. It must be enabled in the homeserver config by uncommenting the config options under opentracing as shown in the sample config. For example:

+
opentracing:
  enabled: true
  homeserver_whitelist:
    - "mytrustedhomeserver.org"
    - "*.myotherhomeservers.com"

Homeserver whitelisting

+

The homeserver whitelist is configured using regular expressions. A list of regular expressions can be given and their union will be compared when propagating any span contexts to another homeserver.
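The idea of matching against the union of the patterns can be sketched as follows. This is an illustration, not Synapse's actual implementation, and the patterns here are written directly as regular expressions rather than in the config's syntax:

```python
import re

# illustrative whitelist patterns, written as plain regular expressions
# (this is not Synapse's actual config parsing)
WHITELIST = [
    re.compile(r"mytrustedhomeserver\.org"),
    re.compile(r".*\.myotherhomeservers\.com"),
]

def should_propagate_context(destination: str) -> bool:
    """Propagate the span context only if the destination matches the
    union of the whitelist patterns."""
    return any(p.fullmatch(destination) for p in WHITELIST)
```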

+

Though it's mostly safe to send and receive span contexts to and from untrusted users, since span contexts are usually opaque ids, it can lead to two problems, namely:

+
  • If the span context is marked as sampled by the sending homeserver, the receiver will sample it. Therefore two homeservers with wildly different sampling policies could incur higher sampling counts than intended.
  • Sending servers can attach arbitrary data to spans, known as 'baggage'. For safety this has been disabled in Synapse, but that doesn't prevent another server sending you baggage which will be logged to OpenTracing's logs.

Configuring Jaeger

+

Sampling strategies can be set as in this document: https://www.jaegertracing.io/docs/latest/sampling/.

+

Log Contexts

+

To help track the processing of individual requests, synapse uses a 'log context' to track which request it is handling at any given moment. This is done via a thread-local variable; a logging.Filter is then used to fish the information back out of the thread-local variable and add it to each log record.
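A minimal sketch of that mechanism (greatly simplified relative to what synapse.logging.context actually does) might look like:

```python
import logging
import threading

# thread-local holding the "current request"; a greatly simplified
# sketch of the idea behind synapse.logging.context
_context = threading.local()

class RequestFilter(logging.Filter):
    """Fish the current request id out of the thread-local and attach it
    to every log record, so that formatters can include %(request)s."""

    def filter(self, record):
        record.request = getattr(_context, "request", "none")
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(request)s - %(message)s"))
handler.addFilter(RequestFilter())

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

_context.request = "GET-1234"
logger.debug("handling request")   # logged as: GET-1234 - handling request
```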

+

Logcontexts are also used for CPU and database accounting, so that we can track which requests were responsible for high CPU use or database activity.

+

The synapse.logging.context module provides facilities for managing the current log context (as well as providing the LoggingContextFilter class).

+

Deferreds make the whole thing complicated, so this document describes how it all works, and how to write code which follows the rules.

+

Logcontexts without Deferreds

+

In the absence of any Deferred voodoo, things are simple enough. As with any code of this nature, the rule is that our function should leave things as it found them:

+
from synapse.logging import context         # omitted from future snippets

def handle_request(request_id):
    request_context = context.LoggingContext()

    calling_context = context.set_current_context(request_context)
    try:
        request_context.request = request_id
        do_request_handling()
        logger.debug("finished")
    finally:
        context.set_current_context(calling_context)

def do_request_handling():
    logger.debug("phew")  # this will be logged against request_id

LoggingContext implements the context management methods, so the above can be written much more succinctly as:

+
def handle_request(request_id):
    with context.LoggingContext() as request_context:
        request_context.request = request_id
        do_request_handling()
        logger.debug("finished")

def do_request_handling():
    logger.debug("phew")

Using logcontexts with Deferreds

+

Deferreds --- and in particular, defer.inlineCallbacks --- break the linear flow of code so that there is no longer a single entry point where we should set the logcontext and a single exit point where we should remove it.

+

Consider the example above, where do_request_handling needs to do some blocking operation, and returns a deferred:

+
@defer.inlineCallbacks
def handle_request(request_id):
    with context.LoggingContext() as request_context:
        request_context.request = request_id
        yield do_request_handling()
        logger.debug("finished")

In the above flow:

+
  • The logcontext is set
  • do_request_handling is called, and returns a deferred
  • handle_request yields the deferred
  • The inlineCallbacks wrapper of handle_request returns a deferred

So we have stopped processing the request (and will probably go on to start processing the next), without clearing the logcontext.

+

To circumvent this problem, synapse code assumes that, wherever you have a deferred, you will want to yield on it. To that end, wherever functions return a deferred, we adopt the following conventions:

+

Rules for functions returning deferreds:

+
+
  • If the deferred is already complete, the function returns with the same logcontext it started with.
  • If the deferred is incomplete, the function clears the logcontext before returning; when the deferred completes, it restores the logcontext before running any callbacks.
+

That sounds complicated, but actually it means a lot of code (including the example above) "just works". There are two cases:

+
  • If do_request_handling returns a completed deferred, then the logcontext will still be in place. In this case, execution will continue immediately after the yield; the "finished" line will be logged against the right context, and the with block restores the original context before we return to the caller.

  • If the returned deferred is incomplete, do_request_handling clears the logcontext before returning. The logcontext is therefore clear when handle_request yields the deferred. At that point, the inlineCallbacks wrapper adds a callback to the deferred, and returns another (incomplete) deferred to the caller, and it is safe to begin processing the next request.

    Once do_request_handling's deferred completes, it will reinstate the logcontext, before running the callback added by the inlineCallbacks wrapper. That callback runs the second half of handle_request, so again the "finished" line will be logged against the right context, and the with block restores the original context.

As an aside, it's worth noting that handle_request follows our rules, though that only matters if the caller has its own logcontext which it cares about.

+

The following sections describe pitfalls and helpful patterns when +implementing these rules.

+

Always yield your deferreds

+

Whenever you get a deferred back from a function, you should yield on it as soon as possible. (Returning it directly to your caller is ok too, if you're not doing inlineCallbacks.) Do not pass go; do not do any logging; do not call any other functions.

+
@defer.inlineCallbacks
def fun():
    logger.debug("starting")
    yield do_some_stuff()       # just like this

    d = more_stuff()
    result = yield d            # also fine, of course

    return result

def nonInlineCallbacksFun():
    logger.debug("just a wrapper really")
    return do_some_stuff()      # this is ok too - the caller will yield on
                                # it anyway.

Provided this pattern is followed all the way back up the call chain to where the logcontext was set, this will make things work out ok: provided do_some_stuff and more_stuff follow the rules above, then so will fun (as wrapped by inlineCallbacks) and nonInlineCallbacksFun.

+

It's all too easy to forget to yield: for instance if we forgot that do_some_stuff returned a deferred, we might plough on regardless. This leads to a mess; it will probably work itself out eventually, but not before a load of stuff has been logged against the wrong context. (Normally, other things will break, more obviously, if you forget to yield, so this tends not to be a major problem in practice.)

+

Of course sometimes you need to do something a bit fancier with your Deferreds - not all code follows the linear A-then-B-then-C pattern. Notes on implementing more complex patterns are in later sections.

+

Where you create a new Deferred, make it follow the rules

+

Most of the time, a Deferred comes from another synapse function. Sometimes, though, we need to make up a new Deferred, or we get a Deferred back from external code. We need to make it follow our rules.

+

The easy way to do it is with a combination of defer.inlineCallbacks, and context.PreserveLoggingContext. Suppose we want to implement sleep, which returns a deferred which will run its callbacks after a given number of seconds. That might look like:

+
# not a logcontext-rules-compliant function
def get_sleep_deferred(seconds):
    d = defer.Deferred()
    reactor.callLater(seconds, d.callback, None)
    return d

That doesn't follow the rules, but we can fix it by wrapping it with PreserveLoggingContext and yielding on it:

+
@defer.inlineCallbacks
def sleep(seconds):
    with PreserveLoggingContext():
        yield get_sleep_deferred(seconds)

This technique works equally well for external functions which return deferreds, and for deferreds we have made ourselves.

+

You can also use context.make_deferred_yieldable, which just does the boilerplate for you, so the above could be written:

+
def sleep(seconds):
    return context.make_deferred_yieldable(get_sleep_deferred(seconds))

Fire-and-forget

+

Sometimes you want to fire off a chain of execution, but not wait for its result. That might look a bit like this:

+
@defer.inlineCallbacks
def do_request_handling():
    yield foreground_operation()

    # *don't* do this
    background_operation()

    logger.debug("Request handling complete")

@defer.inlineCallbacks
def background_operation():
    yield first_background_step()
    logger.debug("Completed first step")
    yield second_background_step()
    logger.debug("Completed second step")

The above code does a couple of steps in the background after do_request_handling has finished. The log lines are still logged against the request_context logcontext, which may or may not be desirable. There are two big problems with the above, however. The first problem is that, if background_operation returns an incomplete Deferred, it will expect its caller to yield immediately, so will have cleared the logcontext. In this example, that means that 'Request handling complete' will be logged without any context.

+

The second problem, which is potentially even worse, is that when the Deferred returned by background_operation completes, it will restore the original logcontext. There is nothing waiting on that Deferred, so the logcontext will leak into the reactor and possibly get attached to some arbitrary future operation.

+

There are two potential solutions to this.

+

One option is to surround the call to background_operation with a PreserveLoggingContext call. That will reset the logcontext before starting background_operation (so the context restored when the deferred completes will be the empty logcontext), and will restore the current logcontext before continuing the foreground process:

+
@defer.inlineCallbacks
def do_request_handling():
    yield foreground_operation()

    # start background_operation off in the empty logcontext, to
    # avoid leaking the current context into the reactor.
    with PreserveLoggingContext():
        background_operation()

    # this will now be logged against the request context
    logger.debug("Request handling complete")

Obviously that option means that the operations done in background_operation would not be logged against a logcontext (though that might be fixed by setting a different logcontext via a with LoggingContext(...) in background_operation).

+

The second option is to use context.run_in_background, which wraps a function so that it doesn't reset the logcontext even when it returns an incomplete deferred, and adds a callback to the returned deferred to reset the logcontext. In other words, it turns a function that follows the Synapse rules about logcontexts and Deferreds into one which behaves more like an external function --- the opposite operation to that described in the previous section. It can be used like this:

+
@defer.inlineCallbacks
def do_request_handling():
    yield foreground_operation()

    context.run_in_background(background_operation)

    # this will now be logged against the request context
    logger.debug("Request handling complete")

Passing synapse deferreds into third-party functions

+

A typical example of this is where we want to collect together two or more deferreds via defer.gatherResults:

+
d1 = operation1()
d2 = operation2()
d3 = defer.gatherResults([d1, d2])

This is really a variation of the fire-and-forget problem above, in that we are firing off d1 and d2 without yielding on them. The difference is that we now have third-party code attached to their callbacks. Anyway, either technique given in the Fire-and-forget section will work.

+

Of course, the new Deferred returned by gatherResults needs to be wrapped in order to make it follow the logcontext rules before we can yield it, as described in Where you create a new Deferred, make it follow the rules.

+

So, option one: reset the logcontext before starting the operations to be gathered:

+
@defer.inlineCallbacks
def do_request_handling():
    with PreserveLoggingContext():
        d1 = operation1()
        d2 = operation2()
        result = yield defer.gatherResults([d1, d2])

In this case particularly, though, option two, of using context.preserve_fn, almost certainly makes more sense, so that operation1 and operation2 are both logged against the original logcontext. This looks like:

+
@defer.inlineCallbacks
def do_request_handling():
    d1 = context.preserve_fn(operation1)()
    d2 = context.preserve_fn(operation2)()

    with PreserveLoggingContext():
        result = yield defer.gatherResults([d1, d2])

Was all this really necessary?

+

The conventions used work fine for a linear flow where everything happens in series via defer.inlineCallbacks and yield, but are certainly tricky to follow for any more exotic flows. It's hard not to wonder if we could have done something else.

+

We're not going to rewrite Synapse now, so the following is entirely of academic interest, but I'd like to record some thoughts on an alternative approach.

+

I briefly prototyped some code following an alternative set of rules. I think it would work, but I certainly didn't get as far as thinking how it would interact with concepts as complicated as the cache descriptors.

+

My alternative rules were:

+
  • functions always preserve the logcontext of their caller, whether or not they are returning a Deferred.
  • Deferreds returned by synapse functions run their callbacks in the same context as the function was originally called in.

The main point of this scheme is that everywhere that sets the logcontext is responsible for clearing it before returning control to the reactor.

+

So, for example, if you were the function which started a with LoggingContext block, you wouldn't yield within it --- instead you'd start off the background process, and then leave the with block to wait for it:

+
def handle_request(request_id):
    with context.LoggingContext() as request_context:
        request_context.request = request_id
        d = do_request_handling()

    def cb(r):
        logger.debug("finished")

    d.addCallback(cb)
    return d

(in general, mixing with LoggingContext blocks and defer.inlineCallbacks in the same function leads to slightly counter-intuitive code, under this scheme).

+

Because we leave the original with block as soon as the Deferred is returned (as opposed to waiting for it to be resolved, as we do today), the logcontext is cleared before control passes back to the reactor; so if there is some code within do_request_handling which needs to wait for a Deferred to complete, there is no need for it to worry about clearing the logcontext before doing so:

+
def handle_request():
    r = do_some_stuff()
    r.addCallback(do_some_more_stuff)
    return r

--- and provided do_some_stuff follows the rules of returning a Deferred which runs its callbacks in the original logcontext, all is happy.

+

The business of a Deferred which runs its callbacks in the original logcontext isn't hard to achieve --- we have it today, in the shape of context._PreservingContextDeferred:

+
def do_some_stuff():
    deferred = do_some_io()
    pcd = _PreservingContextDeferred(LoggingContext.current_context())
    deferred.chainDeferred(pcd)
    return pcd

It turns out that, thanks to the way that Deferreds chain together, we automatically get the property of a context-preserving deferred with defer.inlineCallbacks, provided the final Deferred the function yields on has that property. So we can just write:

+
@defer.inlineCallbacks
def handle_request():
    yield do_some_stuff()
    yield do_some_more_stuff()

To conclude: I think this scheme would have worked equally well, with less danger of messing it up, and probably made some more esoteric code easier to write. But again --- changing the conventions of the entire Synapse codebase is not a sensible option for the marginal improvement offered.

+

A note on garbage-collection of Deferred chains

+

It turns out that our logcontext rules do not play nicely with Deferred +chains which get orphaned and garbage-collected.

+

Imagine we have some code that looks like this:

+
listener_queue = []
+
+def on_something_interesting():
+    for d in listener_queue:
+        d.callback("foo")
+
+@defer.inlineCallbacks
+def await_something_interesting():
+    new_deferred = defer.Deferred()
+    listener_queue.append(new_deferred)
+
+    with PreserveLoggingContext():
+        yield new_deferred
+
+

Obviously, the idea here is that we have a bunch of things which are +waiting for an event. (It's just an example of the problem here, but a +relatively common one.)

+

Now let's imagine two further things happen. First of all, whatever was +waiting for the interesting thing goes away. (Perhaps the request times +out, or something even more interesting happens.)

+

Secondly, let's suppose that we decide that the interesting thing is +never going to happen, and we reset the listener queue:

+
def reset_listener_queue():
+    listener_queue.clear()
+
+

So, both ends of the deferred chain have now dropped their references, +and the deferred chain is now orphaned, and will be garbage-collected at +some point. Note that await_something_interesting is a generator +function, and when Python garbage-collects generator functions, it gives +them a chance to clean up by making the yield raise a GeneratorExit +exception. In our case, that means that the __exit__ handler of +PreserveLoggingContext will carefully restore the request context, but +there is now nothing waiting for its return, so the request context is +never cleared.

+
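The garbage-collection behaviour described above can be demonstrated without Twisted at all: in this standalone sketch, the finally block plays the role of PreserveLoggingContext's __exit__ handler, running after Python throws GeneratorExit into the orphaned, suspended generator.

```python
import gc

cleanup_log = []

def await_something():
    try:
        yield  # suspended here, waiting for a result that never comes
    finally:
        # When the orphaned generator is garbage-collected, Python throws
        # GeneratorExit into the suspended frame, so this cleanup runs
        # with nothing left waiting on the function's return.
        cleanup_log.append("exit handler ran")

g = await_something()
next(g)        # advance to the yield
del g          # both ends of the chain are now dropped
gc.collect()   # force collection, to make the demo deterministic
print(cleanup_log)  # ['exit handler ran']
```

In Synapse's case the cleanup code restores the logcontext rather than appending to a list, which is exactly why the context ends up leaked: the handler runs, but no caller is left to clear it.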

To reiterate, this problem only arises when both ends of a deferred chain are dropped. Dropping the reference to a deferred you're supposed to be calling is probably bad practice, so this doesn't actually happen too much. Unfortunately, when it does happen, it will lead to leaked logcontexts which are incredibly hard to track down.

+

Replication Architecture

+

Motivation

+

We'd like to be able to split some of the work that synapse does into multiple python processes. In theory multiple synapse processes could share a single postgresql database and we'd scale up by running more synapse processes. However, much of synapse assumes that only one process is interacting with the database: for assigning unique identifiers when inserting into tables, for notifying components about new updates, and for invalidating its caches.

+

So running multiple copies of the current code isn't an option. One way +to run multiple processes would be to have a single writer process and +multiple reader processes connected to the same database. In order to do +this we'd need a way for the reader process to invalidate its in-memory +caches when an update happens on the writer. One way to do this is for +the writer to present an append-only log of updates which the readers +can consume to invalidate their caches and to push updates to listening +clients or pushers.

+

Synapse already stores much of its data as an append-only log so that it can correctly respond to /sync requests, so the amount of code changes needed to expose the append-only log to the readers should be fairly minimal.

+

Architecture

+

The Replication Protocol

+

See tcp_replication.md

+

The Slaved DataStore

+

There are read-only versions of the synapse storage layer in synapse/replication/slave/storage that use the response of the replication API to invalidate their caches.

+

TCP Replication

+

Motivation

+

Previously the workers used an HTTP long poll mechanism to get updates +from the master, which had the problem of causing a lot of duplicate +work on the server. This TCP protocol replaces those APIs with the aim +of increased efficiency.

+

Overview

+

The protocol is based on fire-and-forget, line-based commands. An example flow would be (where '>' indicates master to worker and '<' worker to master flows):

+
> SERVER example.com
+< REPLICATE
+> POSITION events master 53 53
+> RDATA events master 54 ["$foo1:bar.com", ...]
+> RDATA events master 55 ["$foo4:bar.com", ...]
+
+

The example shows the server accepting a new connection and sending its identity with the SERVER command, followed by the client requesting the position of all streams with the REPLICATE command. The server then periodically sends RDATA commands which have the format RDATA <stream_name> <instance_name> <token> <row>, where the format of <row> is defined by the individual streams. The <instance_name> is the name of the Synapse process that generated the data (usually "master").

+

Error reporting happens by either the client or server sending an ERROR +command, and usually the connection will be closed.

+

Since the protocol is a simple line-based one, it's possible to manually connect to the server using a tool like netcat. A few things should be noted when manually using the protocol:

+
    +
  • The federation stream is only available if federation sending has +been disabled on the main process.
  • +
• The server will only time out connections that have sent a PING command. If a ping is sent then the connection will be closed if no further commands are received within 15s. Both the client and server protocol implementations will send an initial PING on connection and ensure at least one command every 5s is sent (not necessarily PING).
  • +
  • RDATA commands usually include a numeric token, however if the +stream has multiple rows to replicate per token the server will send +multiple RDATA commands, with all but the last having a token of +batch. See the documentation on commands.RdataCommand for +further details.
  • +
+

Architecture

+

The basic structure of the protocol is line based, where the initial +word of each line specifies the command. The rest of the line is parsed +based on the command. For example, the RDATA command is defined as:

+
RDATA <stream_name> <instance_name> <token> <row_json>
+
+

(Note that <row_json> may contain spaces, but cannot contain newlines.)

+

Blank lines are ignored.

+
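A minimal sketch of parsing such lines follows (the helper names here are hypothetical; Synapse's real command parsing lives in synapse/replication/tcp/commands.py). The key point is that only the first word is split off as the command, because the trailing JSON may itself contain spaces:

```python
import json

def parse_line(line):
    """Split a protocol line into (command, rest); blank lines are ignored."""
    line = line.rstrip("\r\n")
    if not line:
        return None  # blank line
    command, _, rest = line.partition(" ")
    return command, rest

def parse_rdata(rest):
    # RDATA <stream_name> <instance_name> <token> <row_json>
    # Split only three times, so the JSON (which may contain spaces)
    # survives intact as the final field.
    stream_name, instance_name, token, row_json = rest.split(" ", 3)
    return stream_name, instance_name, token, json.loads(row_json)

cmd, rest = parse_line('RDATA events master 54 ["$foo1:bar.com", null]')
print(cmd, parse_rdata(rest))
```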

Keep alives

+

Both sides are expected to send at least one command every 5s or so, and should send a PING command if necessary. If either side does not receive a command within e.g. 15s then the connection should be closed.

+

Because the server may be connected to manually using e.g. netcat, the +timeouts aren't enabled until an initial PING command is seen. Both +the client and server implementations below send a PING command +immediately on connection to ensure the timeouts are enabled.

+

This ensures that both sides can quickly realize if the tcp connection +has gone and handle the situation appropriately.

+
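The keep-alive rules above might be tracked with state like the following sketch (a hypothetical helper, with the 5s/15s intervals taken from the text; the real logic lives in Synapse's protocol implementation):

```python
import time

class ConnectionKeepAlive:
    """Track keep-alive state for one replication connection."""
    SEND_INTERVAL = 5.0   # send something at least this often
    TIMEOUT = 15.0        # drop the peer after this much silence

    def __init__(self):
        now = time.monotonic()
        self.last_sent = now
        self.last_received = now
        self.timeouts_enabled = False  # stays off for e.g. a netcat peer

    def on_command_sent(self):
        self.last_sent = time.monotonic()

    def on_command_received(self, command):
        self.last_received = time.monotonic()
        if command == "PING":
            # Timeouts are only enforced once an initial PING is seen.
            self.timeouts_enabled = True

    def should_send_ping(self):
        return time.monotonic() - self.last_sent >= self.SEND_INTERVAL

    def should_close(self):
        return (self.timeouts_enabled
                and time.monotonic() - self.last_received >= self.TIMEOUT)
```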

Start up

+

When a new connection is made, the server:

+
    +
• Sends a SERVER command, which includes the identity of the server, allowing the client to detect if it's connected to the expected server
  • +
  • Sends a PING command as above, to enable the client to time out +connections promptly.
  • +
+

The client:

+
    +
  • Sends a NAME command, allowing the server to associate a human +friendly name with the connection. This is optional.
  • +
  • Sends a PING as above
  • +
  • Sends a REPLICATE to get the current position of all streams.
  • +
  • On receipt of a SERVER command, checks that the server name +matches the expected server name.
  • +
+

Error handling

+

If either side detects an error it can send an ERROR command and close +the connection.

+

If the client side loses the connection to the server it should +reconnect, following the steps above.

+

Congestion

+

If the server sends messages faster than the client can consume them the server will first buffer a (fairly large) number of commands and then disconnect the client. This ensures that we don't queue up an unbounded number of commands in memory and gives us a potential opportunity to squawk loudly. When/if the client recovers it can reconnect to the server and ask for missed messages.

+

Reliability

+

In general the replication stream should be considered an unreliable +transport since e.g. commands are not resent if the connection +disappears.

+

The exception to that are the replication streams, i.e. RDATA commands, +since these include tokens which can be used to restart the stream on +connection errors.

+

The client should keep track of the token in the last RDATA command received for each stream so that on reconnection it can start streaming from the correct place. Note: not all RDATA have valid tokens due to batching. See RdataCommand for more details.

+

Example

+

An example interaction is shown below. Each line is prefixed with '>' or '<' to indicate which side is sending; these are not included on the wire:

+
* connection established *
+> SERVER localhost:8823
+> PING 1490197665618
+< NAME synapse.app.appservice
+< PING 1490197665618
+< REPLICATE
+> POSITION events master 1 1
+> POSITION backfill master 1 1
+> POSITION caches master 1 1
+> RDATA caches master 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
+> RDATA events master 14 ["$149019767112vOHxz:localhost:8823",
+    "!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
+< PING 1490197675618
+> ERROR server stopping
+* connection closed by server *
+
+

The POSITION command sent by the server is used to set the client's position without needing to send data with the RDATA command.

+

An example of a batched set of RDATA is:

+
> RDATA caches master batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
+> RDATA caches master batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
+> RDATA caches master batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
+> RDATA caches master 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
+
+

In this case the client shouldn't advance its caches stream token until it sees the last RDATA.

+
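A sketch of that batching rule (the class here is hypothetical, not Synapse's actual client): rows whose token is the literal batch are buffered, and the stream position only advances when the closing row with a real token arrives.

```python
class CachesStreamClient:
    """Buffer batched RDATA rows until the closing, tokened row."""

    def __init__(self):
        self.position = None   # last valid token seen for this stream
        self._pending = []

    def on_rdata(self, token, row):
        self._pending.append(row)
        if token == "batch":
            return []                      # batch not yet complete
        rows, self._pending = self._pending, []
        self.position = int(token)         # advance only at the boundary
        return rows
```

Feeding it the four example rows above would return nothing for the first three calls, then all four rows (and a position of 54) on the last.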

List of commands

+

The list of valid commands, with which side can send it: server (S) or +client (C):

+

SERVER (S)

+

Sent at the start to identify which server the client is talking to

+

RDATA (S)

+

A single update in a stream

+

POSITION (S)

+

On receipt of a POSITION command clients should check if they have missed any +updates, and if so then fetch them out of band. Sent in response to a +REPLICATE command (but can happen at any time).

+

The POSITION command includes the source of the stream. Currently all streams +are written by a single process (usually "master"). If fetching missing +updates via HTTP API, rather than via the DB, then processes should make the +request to the appropriate process.

+

Two positions are included, the "new" position and the last position sent respectively. +This allows servers to tell instances that the positions have advanced but no +data has been written, without clients needlessly checking to see if they +have missed any updates.

+

ERROR (S, C)

+

There was an error

+

PING (S, C)

+

Sent periodically to ensure the connection is still alive

+

NAME (C)

+

Sent at the start by client to inform the server who they are

+

REPLICATE (C)

+

Asks the server for the current position of all streams.

+

USER_SYNC (C)

+

A user has started or stopped syncing on this process.

+

CLEAR_USER_SYNC (C)

+

The server should clear all associated user sync data from the worker.

+

This is used when a worker is shutting down.

+

FEDERATION_ACK (C)

+

Acknowledge receipt of some federation data

+

REMOTE_SERVER_UP (S, C)

+

Inform other processes that a remote server may have come back online.

+

See synapse/replication/tcp/commands.py for a detailed description and +the format of each command.

+

Cache Invalidation Stream

+

The cache invalidation stream is used to inform workers when they need +to invalidate any of their caches in the data store. This is done by +streaming all cache invalidations done on master down to the workers, +assuming that any caches on the workers also exist on the master.

+

Each individual cache invalidation results in a row being sent down replication, which includes the cache name (the name of the function) and the key to invalidate. For example:

+
> RDATA caches master 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
+
+

Alternatively, an entire cache can be invalidated by sending down a null +instead of the key. For example:

+
> RDATA caches master 550953772 ["get_user_by_id", null, 1550574873252]
+
+
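A worker-side handler for these two row shapes might look like the following sketch (a hypothetical function; Synapse's real handling lives in its replication and storage layers):

```python
def process_cache_invalidation(caches, cache_name, keys):
    """Apply one 'caches' stream row to a worker's in-memory caches.

    keys is the list of arguments identifying one cached entry, or
    None to drop the entire named cache.
    """
    cache = caches.get(cache_name)
    if cache is None:
        return  # this worker doesn't hold that cache
    if keys is None:
        cache.clear()                 # whole-cache invalidation
    else:
        cache.pop(tuple(keys), None)  # single-entry invalidation

caches = {"get_user_by_id": {("@bob:example.com",): "<cached user>"}}
process_cache_invalidation(caches, "get_user_by_id", ["@bob:example.com"])
print(caches)  # {'get_user_by_id': {}}
```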

However, there are times when a number of caches need to be invalidated at the same time with the same key. To reduce traffic we batch those invalidations into a single poke by defining a special cache name that workers understand to expand into the correct set of caches to invalidate.

+

Currently the special cache names are declared in +synapse/storage/_base.py and are:

+
    +
  1. cs_cache_fake ─ invalidates caches that depend on the current +state
  2. +
+

Internal Documentation

+

This section covers implementation documentation for various parts of Synapse.

+

If a developer is planning to make a change to a feature of Synapse, it can be useful for general documentation of how that feature is implemented to be available. This saves the developer time that would otherwise be spent working out how the feature works by reading the code.

+

Documentation that would be more useful from the perspective of a system administrator, rather than a developer who intends to change the code, should instead be placed under the Usage section of the documentation.

+

How to test SAML as a developer without a server

+

https://capriza.github.io/samling/samling.html (https://github.com/capriza/samling) is a great +resource for being able to tinker with the SAML options within Synapse without needing to +deploy and configure a complicated software stack.

+

To make Synapse (and therefore Riot) use it:

+
    +
  1. Use the samling.html URL above or deploy your own and visit the IdP Metadata tab.
  2. +
  3. Copy the XML to your clipboard.
  4. +
  5. On your Synapse server, create a new file samling.xml next to your homeserver.yaml with +the XML from step 2 as the contents.
  6. +
  7. Edit your homeserver.yaml to include: +
    saml2_config:
    +  sp_config:
    +    allow_unknown_attributes: true  # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
    +    metadata:
    +      local: ["samling.xml"]   
    +
    +
  8. +
  9. Ensure that your homeserver.yaml has a setting for public_baseurl: +
    public_baseurl: http://localhost:8080/
    +
    +
  10. +
  11. Run apt-get install xmlsec1 and pip install --upgrade --force 'pysaml2>=4.5.0' to ensure +the dependencies are installed and ready to go.
  12. +
  13. Restart Synapse.
  14. +
+

Then in Riot:

+
    +
  1. Visit the login page with a Riot pointing at your homeserver.
  2. +
  3. Click the Single Sign-On button.
  4. +
  5. On the samling page, enter a Name Identifier and add a SAML Attribute for uid=your_localpart. +The response must also be signed.
  6. +
  7. Click "Next".
  8. +
  9. Click "Post Response" (change nothing).
  10. +
  11. You should be logged in.
  12. +
+

If you try and repeat this process, you may be automatically logged in using the information you +gave previously. To fix this, open your developer console (F12 or Ctrl+Shift+I) while on the +samling page and clear the site data. In Chrome, this will be a button on the Application tab.

+

How to test CAS as a developer without a server

+

The django-mama-cas project is an +easy to run CAS implementation built on top of Django.

+

Prerequisites

+
    +
  1. Create a new virtualenv: python3 -m venv <your virtualenv>
  2. +
  3. Activate your virtualenv: source /path/to/your/virtualenv/bin/activate
  4. +
  5. Install Django and django-mama-cas: +
    python -m pip install "django<3" "django-mama-cas==2.4.0"
    +
    +
  6. +
  7. Create a Django project in the current directory: +
    django-admin startproject cas_test .
    +
    +
  8. +
  9. Follow the install directions for django-mama-cas
  10. +
  11. Setup the SQLite database: python manage.py migrate
  12. +
  13. Create a user: +
    python manage.py createsuperuser
    +
    +
      +
    1. Use whatever you want as the username and password.
    2. +
    3. Leave the other fields blank.
    4. +
    +
  14. +
  15. Use the built-in Django test server to serve the CAS endpoints on port 8000: +
    python manage.py runserver
    +
    +
  16. +
+

You should now have a Django project configured to serve CAS authentication with +a single user created.

+

Configure Synapse (and Element) to use CAS

+
    +
  1. Modify your homeserver.yaml to enable CAS and point it to your locally +running Django test server: +
    cas_config:
    +  enabled: true
    +  server_url: "http://localhost:8000"
    +  service_url: "http://localhost:8081"
    +  #displayname_attribute: name
    +  #required_attributes:
    +  #    name: value
    +
    +
  2. +
  3. Restart Synapse.
  4. +
+

Note that the above configuration assumes the homeserver is running on port 8081 +and that the CAS server is on port 8000, both on localhost.

+

Testing the configuration

+

Then in Element:

+
    +
1. Visit the login page with an Element pointing at your homeserver.
  2. +
  3. Click the Single Sign-On button.
  4. +
  5. Login using the credentials created with createsuperuser.
  6. +
  7. You should be logged in.
  8. +
+

If you want to repeat this process you'll need to manually logout first:

+
    +
  1. http://localhost:8000/admin/
  2. +
  3. Click "logout" in the top right.
  4. +
+

Auth Chain Difference Algorithm

+

The auth chain difference algorithm is used by V2 state resolution, where a +naive implementation can be a significant source of CPU and DB usage.

+

Definitions

+

A state set is a set of state events; e.g. the input of a state resolution +algorithm is a collection of state sets.

+

The auth chain of a set of events consists of all the events' auth events and their auth events, recursively (i.e. the events reachable by walking the graph induced by an event's auth events links).

+

The auth chain difference of a collection of state sets is the union minus the intersection of the sets of auth chains corresponding to the state sets, i.e. an event is in the auth chain difference if it is reachable by walking the auth event graph from at least one of the state sets but not from all of the state sets.

+

Breadth First Walk Algorithm

+

A way of calculating the auth chain difference without calculating the full auth +chains for each state set is to do a parallel breadth first walk (ordered by +depth) of each state set's auth chain. By tracking which events are reachable +from each state set we can finish early if every pending event is reachable from +every state set.

+

This can work well for state sets that have a small auth chain difference, but +can be very inefficient for larger differences. However, this algorithm is still +used if we don't have a chain cover index for the room (e.g. because we're in +the process of indexing it).

+

Chain Cover Index

+

Synapse computes auth chain differences by pre-computing a "chain cover" index +for the auth chain in a room, allowing efficient reachability queries like "is +event A in the auth chain of event B". This is done by assigning every event a +chain ID and sequence number (e.g. (5,3)), and having a map of links +between chains (e.g. (5,3) -> (2,4)) such that A is reachable by B (i.e. A +is in the auth chain of B) if and only if either:

+
    +
  1. A and B have the same chain ID and A's sequence number is less than B's +sequence number; or
  2. +
  3. there is a link L between B's chain ID and A's chain ID such that +L.start_seq_no <= B.seq_no and A.seq_no <= L.end_seq_no.
  4. +
+
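The two rules above can be sketched directly as a reachability check (a hypothetical helper; names and the links data structure are illustrative, not Synapse's actual schema):

```python
def is_in_auth_chain(a, b, links):
    """Is event A in the auth chain of event B?

    Events are (chain ID, sequence number) pairs; links maps
    (B's chain ID, A's chain ID) -> list of (start_seq_no, end_seq_no).
    """
    a_chain, a_seq = a
    b_chain, b_seq = b
    if a_chain == b_chain:               # rule 1: same chain
        return a_seq < b_seq
    for start, end in links.get((b_chain, a_chain), ()):   # rule 2
        if start <= b_seq and a_seq <= end:
            return True
    return False

# The single link from the text: (5,3) -> (2,4)
links = {(5, 2): [(3, 4)]}
print(is_in_auth_chain((2, 4), (5, 3), links))  # True
```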

There are actually two potential implementations, one where we store links from each chain to every other reachable chain (the transitive closure of the links graph), and one where we remove redundant links (the transitive reduction of the links graph) e.g. if we have chains C3 -> C2 -> C1 then the link C3 -> C1 would not be stored. Synapse uses the former implementation so that it doesn't need to recurse to test reachability between chains.

+

Example

+

An example auth graph would look like the following, where chains have been formed based on type/state_key and are denoted by colour and are labelled with (chain ID, sequence number). Links are denoted by the arrows (links in grey are those that would be removed in the second implementation described above).

+

Example

+

Note that we don't include all links between events and their auth events, as most of those links would be redundant. For example, all events point to the create event, but each chain only needs the one link from its base to the create event.

+

Using the Index

+

This index can be used to calculate the auth chain difference of the state sets +by looking at the chain ID and sequence numbers reachable from each state set:

+
    +
  1. For every state set lookup the chain ID/sequence numbers of each state event
  2. +
  3. Use the index to find all chains and the maximum sequence number reachable +from each state set.
  4. +
  5. The auth chain difference is then all events in each chain that have sequence +numbers between the maximum sequence number reachable from any state set and +the minimum reachable by all state sets (if any).
  6. +
+

Note that step 2 is effectively calculating the auth chain for each state set (in terms of chain IDs and sequence numbers), and step 3 is calculating the difference between the union and intersection of the auth chains.

+

Worked Example

+

For example, given the above graph, we can calculate the difference between +state sets consisting of:

+
    +
  1. S1: Alice's invite (4,1) and Bob's second join (2,2); and
  2. +
  3. S2: Alice's second join (4,3) and Bob's first join (2,1).
  4. +
+

Using the index we see that the following auth chains are reachable from each +state set:

+
    +
  1. S1: (1,1), (2,2), (3,1) & (4,1)
  2. +
  3. S2: (1,1), (2,1), (3,2) & (4,3)
  4. +
+

And so, for each the ranges that are in the auth chain difference:

+
    +
  1. Chain 1: None, (since everything can reach the create event).
  2. +
  3. Chain 2: The range (1, 2] (i.e. just 2), as 1 is reachable by all state +sets and the maximum reachable is 2 (corresponding to Bob's second join).
  4. +
  5. Chain 3: Similarly the range (1, 2] (corresponding to the second power +level).
  6. +
  7. Chain 4: The range (1, 3] (corresponding to both of Alice's joins).
  8. +
+

So the final result is: Bob's second join (2,2), the second power level +(3,2) and both of Alice's joins (4,2) & (4,3).

+
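The worked example can be reproduced with a small sketch of steps 2 and 3, taking as input each state set's reachable maximum sequence number per chain (the index lookups themselves are assumed already done; names are illustrative):

```python
def auth_chain_difference(reachable):
    """Return, per chain, the half-open range (min, max] of sequence
    numbers in the auth chain difference.

    'reachable' holds, for each state set, a map of
    chain ID -> maximum sequence number reachable from that set;
    a chain missing from a set is treated as unreachable (max 0).
    """
    chains = set().union(*(set(r) for r in reachable))
    diff = {}
    for chain in chains:
        maxes = [r.get(chain, 0) for r in reachable]
        lo, hi = min(maxes), max(maxes)  # reachable by all vs. by any
        if hi > lo:
            diff[chain] = (lo, hi)
    return diff

# S1 and S2 from the worked example above, as chain ID -> max seq no:
s1 = {1: 1, 2: 2, 3: 1, 4: 1}
s2 = {1: 1, 2: 1, 3: 2, 4: 3}
print(auth_chain_difference([s1, s2]))
```

Chain 1 drops out (everything reaches the create event), while chains 2, 3 and 4 yield the ranges (1, 2], (1, 2] and (1, 3] as in the text.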

Media Repository

+

Synapse implementation-specific details for the media repository

+

The media repository is where attachments and avatar photos are stored. +It stores attachment content and thumbnails for media uploaded by local users. +It caches attachment content and thumbnails for media uploaded by remote users.

+

Storage

+

Each item of media is assigned a media_id when it is uploaded. +The media_id is a randomly chosen, URL safe 24 character string.

+

Metadata such as the MIME type, upload time and length are stored in the +sqlite3 database indexed by media_id.

+

Content is stored on the filesystem under a "local_content" directory.

+

Thumbnails are stored under a "local_thumbnails" directory.

+

The item with media_id "aabbccccccccdddddddddddd" is stored under "local_content/aa/bb/ccccccccdddddddddddd". Its thumbnail with width 128 and height 96 and type "image/jpeg" is stored under "local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg".

+
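The path layout above can be sketched as a pair of helper functions (the function names are hypothetical, for illustration only):

```python
def local_content_path(media_id):
    # "aabbccccccccdddddddddddd" -> "local_content/aa/bb/ccccccccdddddddddddd"
    return f"local_content/{media_id[:2]}/{media_id[2:4]}/{media_id[4:]}"

def local_thumbnail_path(media_id, width, height, content_type):
    # 128x96 "image/jpeg" -> ".../128-96-image-jpeg"
    file_name = f"{width}-{height}-{content_type.replace('/', '-')}"
    return (f"local_thumbnails/{media_id[:2]}/{media_id[2:4]}/"
            f"{media_id[4:]}/{file_name}")

print(local_content_path("aabbccccccccdddddddddddd"))
```

Splitting the first four characters into two directory levels keeps any single directory from accumulating an unmanageable number of entries.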

Remote content is cached under "remote_content" directory. Each item of +remote content is assigned a local "filesystem_id" to ensure that the +directory structure "remote_content/server_name/aa/bb/ccccccccdddddddddddd" +is appropriate. Thumbnails for remote content are stored under +"remote_thumbnails/server_name/..."

+

Room and User Statistics

+

Synapse maintains room and user statistics (as well as a cache of room state), +in various tables. These can be used for administrative purposes but are also +used when generating the public room directory.

+

Synapse Developer Documentation

+

High-Level Concepts

+

Definitions

+
    +
  • subject: Something we are tracking stats about – currently a room or user.
  • +
  • current row: An entry for a subject in the appropriate current statistics +table. Each subject can have only one.
  • +
  • historical row: An entry for a subject in the appropriate historical +statistics table. Each subject can have any number of these.
  • +
+

Overview

+

Stats are maintained as time series. There are two kinds of column:

+
    +
  • absolute columns – where the value is correct for the time given by end_ts +in the stats row. (Imagine a line graph for these values) +
      +
    • They can also be thought of as 'gauges' in Prometheus, if you are familiar.
    • +
    +
  • +
  • per-slice columns – where the value corresponds to how many of the occurrences +occurred within the time slice given by (end_ts − bucket_size)…end_ts +or start_ts…end_ts. (Imagine a histogram for these values)
  • +
+

Stats are maintained in two tables (for each type): current and historical.

+

Current stats correspond to the present values. Each subject can only have one +entry.

+

Historical stats correspond to values in the past. Subjects may have multiple +entries.

+

Concepts around the management of stats

+

Current rows

+

Current rows contain the most up-to-date statistics for a room. They only contain absolute columns.

+

Historical rows

+

Historical rows can always be considered to be valid for the time slice and +end time specified.

+
    +
  • historical rows will not exist for every time slice – they will be omitted +if there were no changes. In this case, the following assumptions can be +made to interpolate/recreate missing rows: +
      +
    • absolute fields have the same values as in the preceding row
    • +
    • per-slice fields are zero (0)
    • +
    +
  • +
  • historical rows will not be retained forever – rows older than a configurable +time will be purged.
  • +
+
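Those interpolation assumptions can be sketched as follows (a hypothetical helper; the field names in the usage example are illustrative, not the actual table columns):

```python
def interpolate_row(prev_row, absolute_fields, per_slice_fields):
    """Recreate a missing historical stats row from the preceding one:
    absolute fields carry forward unchanged, per-slice fields are zero."""
    row = {f: prev_row[f] for f in absolute_fields}  # same as preceding row
    row.update({f: 0 for f in per_slice_fields})     # no occurrences
    return row

prev = {"joined_members": 5, "invited_members": 2, "total_events": 7}
print(interpolate_row(prev,
                      ["joined_members", "invited_members"],  # absolute
                      ["total_events"]))                      # per-slice
```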

Purge

+

The purging of historical rows is not yet implemented.

+

Deprecation Policy for Platform Dependencies

+

Synapse has a number of platform dependencies, including Python and PostgreSQL. +This document outlines the policy towards which versions we support, and when we +drop support for versions in the future.

+

Policy

+

Synapse follows the upstream support life cycles for Python and PostgreSQL, +i.e. when a version reaches End of Life Synapse will withdraw support for that +version in future releases.

+

Details on the upstream support life cycles for Python and PostgreSQL are +documented at https://endoflife.date/python and +https://endoflife.date/postgresql.

+

Context

+

It is important for system admins to have a clear understanding of the platform +requirements of Synapse and its deprecation policies so that they can +effectively plan upgrading their infrastructure ahead of time. This is +especially important in contexts where upgrading the infrastructure requires +auditing and approval from a security team, or where otherwise upgrading is a +long process.

+

By following the upstream support life cycles Synapse can ensure that its +dependencies continue to get security patches, while not requiring system admins +to constantly update their platform dependencies to the latest versions.

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file -- cgit 1.5.1