From f16c6cf59aad485cac83ce9d6d87842ae53adedf Mon Sep 17 00:00:00 2001
From: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Date: Wed, 14 Apr 2021 12:06:19 +0100
Subject: Add note to docker docs explaining platform support (#9801)

Context is in https://github.com/matrix-org/synapse/issues/9764#issuecomment-818615894.

I struggled to find a more official link for this. The problem occurs when using WSL1 instead of WSL2, which some Windows platforms (at least Server 2019) still don't have.

Docker have updated their documentation to paint a much happier picture now given WSL2's support.

The last sentence here can probably be removed once WSL1 is no longer around... though that will likely not be for a very long time.
---
 changelog.d/9801.doc | 1 +
 docker/README.md | 9 ++++++---
 2 files changed, 7 insertions(+), 3 deletions(-)
 create mode 100644 changelog.d/9801.doc

diff --git a/changelog.d/9801.doc b/changelog.d/9801.doc
new file mode 100644
index 0000000000..8b8b9d01d4
--- /dev/null
+++ b/changelog.d/9801.doc
@@ -0,0 +1 @@
+Add a note to the docker docs mentioning that we mirror upstream's supported Docker platforms.

diff --git a/docker/README.md b/docker/README.md
index 3a7dc585e7..b65bcea636 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -2,9 +2,12 @@
 This Docker image will run Synapse as a single process. By default it uses a
 sqlite database; for production use you should connect it to a separate
-postgres database.
+postgres database. The image also does *not* provide a TURN server.

-The image also does *not* provide a TURN server.
+This image should work on all platforms that are supported by Docker upstream.
+Note that Docker's WSL1-backend Linux Containers on Windows
+platform is [experimental](https://github.com/docker/for-win/issues/6470) and
+is not supported by this image.

 ## Volumes

@@ -208,4 +211,4 @@ healthcheck:
 ## Using jemalloc

 Jemalloc is embedded in the image and will be used instead of the default allocator.
-You can read about jemalloc by reading the Synapse [README](../README.md)
\ No newline at end of file
+You can read about jemalloc by reading the Synapse [README](../README.md)
-- cgit 1.5.1

From 7e460ec2a566b19bbcda63bc04b1e422127a99b3 Mon Sep 17 00:00:00 2001
From: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Date: Wed, 14 Apr 2021 13:54:49 +0100
Subject: Add a dockerfile for running a set of Synapse worker processes (#9162)

This PR adds a Dockerfile and some supporting files to the `docker/` directory. The Dockerfile's intention is to spin up a container with:
* A Synapse main process.
* Any desired worker processes, defined by a `SYNAPSE_WORKER_TYPES` environment variable supplied at runtime.
* A redis for worker communication.
* A nginx for routing traffic.
* A supervisord to start all worker processes and monitor them if any go down.

Note that **this is not currently intended to be used in production**. If you'd like to use Synapse workers with Docker, instead make use of the official image, with one worker per container. The purpose of this dockerfile is currently to allow testing Synapse in worker mode with the [Complement](https://github.com/matrix-org/complement/) test suite.

`configure_workers_and_start.py` is where most of the magic happens in this PR. It reads from environment variables (documented in the file) and creates all necessary config files for the processes.
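[Editor's aside: a minimal sketch of the comma-separated parsing such an entrypoint involves. The helper name is hypothetical; the real script documents its variables at the top of the file.]

```python
import os

def parse_worker_types(environ=os.environ) -> list:
    """Split a SYNAPSE_WORKER_TYPES-style comma-separated value into names."""
    raw = environ.get("SYNAPSE_WORKER_TYPES", "")
    # An empty or missing value means "no workers, main process only".
    return [w.strip() for w in raw.split(",") if w.strip()]

print(parse_worker_types({"SYNAPSE_WORKER_TYPES": "synchrotron, user_dir"}))
# -> ['synchrotron', 'user_dir']
```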
It is the entrypoint of the Dockerfile, and thus is run any time the docker container is spun up, recreating all config files in case you want to use a different set of workers. One can specify which workers they'd like to use by setting the `SYNAPSE_WORKER_TYPES` environment variable (as a comma-separated list of arbitrary worker names) or by setting it to `*` for all worker processes. We will be using the latter in CI.

Huge thanks to @MatMaul for helping get this all working :tada:

This PR is paired with its equivalent on the Complement side: https://github.com/matrix-org/complement/pull/62.

Note, for the purpose of testing this PR before it's merged: You'll need to (re)build the base Synapse docker image for everything to work (`matrixdotorg/synapse:latest`). Then build the worker-based docker image on top (`matrixdotorg/synapse:workers`).
---
 changelog.d/9162.misc | 1 +
 docker/Dockerfile-workers | 23 ++
 docker/README-testing.md | 140 ++++++++
 docker/README.md | 12 +-
 docker/conf-workers/nginx.conf.j2 | 27 ++
 docker/conf-workers/shared.yaml.j2 | 9 +
 docker/conf-workers/supervisord.conf.j2 | 41 +++
 docker/conf-workers/worker.yaml.j2 | 26 ++
 docker/conf/homeserver.yaml | 4 +-
 docker/conf/log.config | 32 +-
 docker/configure_workers_and_start.py | 558 ++++++++++++++++++++++++++++++++
 11 files changed, 867 insertions(+), 6 deletions(-)
 create mode 100644 changelog.d/9162.misc
 create mode 100644 docker/Dockerfile-workers
 create mode 100644 docker/README-testing.md
 create mode 100644 docker/conf-workers/nginx.conf.j2
 create mode 100644 docker/conf-workers/shared.yaml.j2
 create mode 100644 docker/conf-workers/supervisord.conf.j2
 create mode 100644 docker/conf-workers/worker.yaml.j2
 create mode 100755 docker/configure_workers_and_start.py

diff --git a/changelog.d/9162.misc b/changelog.d/9162.misc
new file mode 100644
index 0000000000..1083da8a7a
--- /dev/null
+++ b/changelog.d/9162.misc
@@ -0,0 +1 @@
+Add a dockerfile for running Synapse in worker-mode under Complement.
\ No newline at end of file

diff --git a/docker/Dockerfile-workers b/docker/Dockerfile-workers
new file mode 100644
index 0000000000..969cf97286
--- /dev/null
+++ b/docker/Dockerfile-workers
@@ -0,0 +1,23 @@
+# Inherit from the official Synapse docker image
+FROM matrixdotorg/synapse
+
+# Install deps
+RUN apt-get update
+RUN apt-get install -y supervisor redis nginx
+
+# Remove the default nginx sites
+RUN rm /etc/nginx/sites-enabled/default
+
+# Copy Synapse worker, nginx and supervisord configuration template files
+COPY ./docker/conf-workers/* /conf/
+
+# Expose nginx listener port
+EXPOSE 8080/tcp
+
+# Volume for user-editable config files, logs etc.
+VOLUME ["/data"]
+
+# A script to read environment variables and create the necessary
+# files to run the desired worker configuration. Will start supervisord.
+COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
+ENTRYPOINT ["/configure_workers_and_start.py"]

diff --git a/docker/README-testing.md b/docker/README-testing.md
new file mode 100644
index 0000000000..6a5baf9e28
--- /dev/null
+++ b/docker/README-testing.md
@@ -0,0 +1,140 @@
+# Running tests against a dockerised Synapse
+
+It's possible to run integration tests against Synapse
+using [Complement](https://github.com/matrix-org/complement). Complement is a Matrix Spec
+compliance test suite for homeservers, and supports any homeserver docker image configured
+to listen on ports 8008/8448.
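[Editor's aside: a quick way to sanity-check that a locally running homeserver container meets that expectation — a sketch assuming the container maps its client port to `localhost:8008`; the URL below is the standard client-server versions endpoint.]

```python
import json
import urllib.request

# Poll the client API to confirm the image is listening where Complement expects.
with urllib.request.urlopen("http://localhost:8008/_matrix/client/versions") as resp:
    print(json.loads(resp.read())["versions"])
```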
+This document contains instructions for building Synapse docker images that can be
+run inside Complement for testing purposes.
+
+Note that running Synapse's unit tests from within the docker image is not supported.
+
+## Testing with SQLite and single-process Synapse
+
+> Note that `scripts-dev/complement.sh` is a script that will automatically build
+> and run an SQLite-based, single-process build of Synapse against Complement.
+
+The instructions below will set up Complement testing for a single-process,
+SQLite-based Synapse deployment.
+
+Start by building the base Synapse docker image. If you wish to run tests with the latest
+release of Synapse, instead of your current checkout, you can skip this step. From the
+root of the repository:
+
+```sh
+docker build -t matrixdotorg/synapse -f docker/Dockerfile .
+```
+
+This will build an image with the tag `matrixdotorg/synapse`.
+
+Next, build the Synapse image for Complement. You will need a local checkout
+of Complement. Change to the root of your Complement checkout and run:
+
+```sh
+docker build -t complement-synapse -f "dockerfiles/Synapse.Dockerfile" dockerfiles
+```
+
+This will build an image with the tag `complement-synapse`, which can be handed to
+Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
+[Complement's documentation](https://github.com/matrix-org/complement/#running) for
+how to run the tests, as well as the various available command line flags.
+
+## Testing with PostgreSQL and single or multi-process Synapse
+
+The above docker image only supports running Synapse with SQLite and in a
+single-process topology. The following instructions are used to build a Synapse image for
+Complement that supports either single or multi-process topology with a PostgreSQL
+database backend.
+
+As with the single-process image, build the base Synapse docker image. If you wish to run
+tests with the latest release of Synapse, instead of your current checkout, you can skip
+this step. From the root of the repository:
+
+```sh
+docker build -t matrixdotorg/synapse -f docker/Dockerfile .
+```
+
+This will build an image with the tag `matrixdotorg/synapse`.
+
+Next, we build a new image with worker support based on `matrixdotorg/synapse:latest`.
+Again, from the root of the repository:
+
+```sh
+docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
+```
+
+This will build an image with the tag `matrixdotorg/synapse-workers`.
+
+It's worth noting at this point that this image is fully functional, and
+can be used for local testing as well. See instructions for using the container
+under
+[Running the Dockerfile-worker image standalone](#running-the-dockerfile-worker-image-standalone)
+below.
+
+Finally, build the Synapse image for Complement, which is based on
+`matrixdotorg/synapse-workers`. You will need a local checkout of Complement. Change to
+the root of your Complement checkout and run:
+
+```sh
+docker build -t matrixdotorg/complement-synapse-workers -f dockerfiles/SynapseWorkers.Dockerfile dockerfiles
+```
+
+This will build an image with the tag `matrixdotorg/complement-synapse-workers`, which can be handed to
+Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
+[Complement's documentation](https://github.com/matrix-org/complement/#running) for
+how to run the tests, as well as the various available command line flags.
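[Editor's aside: to hand this image to Complement from a script rather than the shell — a sketch that mirrors Complement's documented `go test` invocation at the time; check Complement's README for the current flags.]

```python
import os
import subprocess

# Run Complement against the worker-based image built above, from the root
# of a Complement checkout.
env = dict(os.environ, COMPLEMENT_BASE_IMAGE="matrixdotorg/complement-synapse-workers")
subprocess.run(["go", "test", "-v", "./tests/..."], env=env, check=True)
```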
+
+## Running the Dockerfile-worker image standalone
+
+For manual testing of a multi-process Synapse instance in Docker,
+[Dockerfile-workers](Dockerfile-workers) is a Dockerfile that will produce an image
+bundling all necessary components together for a workerised homeserver instance.
+
+This includes any desired Synapse worker processes, a nginx to route traffic accordingly,
+a redis for worker communication and a supervisord instance to start up and monitor all
+processes. You will need to provide your own postgres container to connect to, and TLS
+is not handled by the container.
+
+Once you've built the image using the above instructions, you can run it. Be sure
+you've set up a volume according to the [usual Synapse docker instructions](README.md).
+Then run something along the lines of:
+
+```
+docker run -d --name synapse \
+    --mount type=volume,src=synapse-data,dst=/data \
+    -p 8008:8008 \
+    -e SYNAPSE_SERVER_NAME=my.matrix.host \
+    -e SYNAPSE_REPORT_STATS=no \
+    -e POSTGRES_HOST=postgres \
+    -e POSTGRES_USER=postgres \
+    -e POSTGRES_PASSWORD=somesecret \
+    -e SYNAPSE_WORKER_TYPES=synchrotron,media_repository,user_dir \
+    -e SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1 \
+    matrixdotorg/synapse-workers
+```
+
+...substituting `POSTGRES*` variables for those that match a postgres host you have
+available (usually a running postgres docker container).
+
+The `SYNAPSE_WORKER_TYPES` environment variable is a comma-separated list of workers to
+use when running the container. All possible worker names are defined by the keys of the
+`WORKERS_CONFIG` variable in [this script](configure_workers_and_start.py), which the
+Dockerfile makes use of to generate appropriate worker, nginx and supervisord config
+files.
+
+Sharding is supported for a subset of workers, in line with the
+[worker documentation](../docs/workers.md). To run multiple instances of a given worker
+type, simply specify the type multiple times in `SYNAPSE_WORKER_TYPES`
+(e.g. `SYNAPSE_WORKER_TYPES=event_creator,event_creator...`).
+
+Otherwise, `SYNAPSE_WORKER_TYPES` can either be left empty or unset to spawn no workers
+(leaving only the main process). The container is configured to use redis-based worker
+mode.
+
+Logs for workers and the main process are logged to stdout and can be viewed with
+standard `docker logs` tooling. Worker logs contain their worker name
+after the timestamp.
+
+Setting `SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1` will cause worker logs to be written to
+`/logs/<worker_name>.log`. Logs are kept for 1 week and rotate every day at 00:00,
+according to the container's clock. Logging for the main process must still be
+configured by modifying the homeserver's log config in your Synapse data volume.

diff --git a/docker/README.md b/docker/README.md
index b65bcea636..a7d1e670fe 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -11,7 +11,7 @@ is not supported by this image.
 
 ## Volumes
 
-By default, the image expects a single volume, located at ``/data``, that will hold:
+By default, the image expects a single volume, located at `/data`, that will hold:
 
 * configuration files;
 * uploaded media and thumbnails;
@@ -19,11 +19,11 @@ By default, the image expects a single volume, located at `/data`, that will h
 * the appservices configuration.
 
 You are free to use separate volumes depending on storage endpoints at your
-disposal. For instance, ``/data/media`` could be stored on a large but low
+disposal. For instance, `/data/media` could be stored on a large but low
 performance hdd storage while other files could be stored on high performance
 endpoints.
 
-In order to setup an application service, simply create an ``appservices``
+In order to setup an application service, simply create an `appservices`
 directory in the data volume and write the application service Yaml
 configuration file there. Multiple application services are supported.
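[Editor's aside: a small sketch of pre-creating that layout on the host before first start, including the `appservices` directory described above. The host path is illustrative.]

```python
from pathlib import Path

# Prepare a host directory to back the /data volume, with a subdirectory
# for application service registration YAML files.
data = Path("./synapse-data")
(data / "appservices").mkdir(parents=True, exist_ok=True)
```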
@@ -56,6 +56,8 @@ The following environment variables are supported in `generate` mode:
 
 * `SYNAPSE_SERVER_NAME` (mandatory): the server public hostname.
 * `SYNAPSE_REPORT_STATS` (mandatory, `yes` or `no`): whether to enable
   anonymous statistics reporting.
+* `SYNAPSE_HTTP_PORT`: the port Synapse should listen on for http traffic.
+  Defaults to `8008`.
 * `SYNAPSE_CONFIG_DIR`: where additional config files (such as the log config
   and event signing key) will be stored. Defaults to `/data`.
 * `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
@@ -76,6 +78,8 @@
 docker run -d --name synapse \
     matrixdotorg/synapse:latest
 ```
 
+(assuming 8008 is the port Synapse is configured to listen on for http traffic.)
+
 You can then check that it has started correctly with:
 
 ```
@@ -211,4 +215,4 @@ healthcheck:
 ## Using jemalloc
 
 Jemalloc is embedded in the image and will be used instead of the default allocator.
-You can read about jemalloc by reading the Synapse [README](../README.md)
+You can read about jemalloc by reading the Synapse [README](../README.md).

diff --git a/docker/conf-workers/nginx.conf.j2 b/docker/conf-workers/nginx.conf.j2
new file mode 100644
index 0000000000..1081979e06
--- /dev/null
+++ b/docker/conf-workers/nginx.conf.j2
@@ -0,0 +1,27 @@
+# This file contains the base config for the reverse proxy, as part of ../Dockerfile-workers.
+# configure_workers_and_start.py uses and amends this file depending on the workers
+# that have been selected.
+
+{{ upstream_directives }}
+
+server {
+    # Listen on an unoccupied port number
+    listen 8008;
+    listen [::]:8008;
+
+    server_name localhost;
+
+    # Nginx by default only allows file uploads up to 1M in size
+    # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
+    client_max_body_size 100M;
+
+{{ worker_locations }}
+
+    # Send all other traffic to the main process
+    location ~* ^(\\/_matrix|\\/_synapse) {
+        proxy_pass http://localhost:8080;
+        proxy_set_header X-Forwarded-For $remote_addr;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header Host $host;
+    }
+}

diff --git a/docker/conf-workers/shared.yaml.j2 b/docker/conf-workers/shared.yaml.j2
new file mode 100644
index 0000000000..f94b8c6aca
--- /dev/null
+++ b/docker/conf-workers/shared.yaml.j2
@@ -0,0 +1,9 @@
+# This file contains the base for the shared homeserver config file between Synapse workers,
+# as part of ./Dockerfile-workers.
+# configure_workers_and_start.py uses and amends this file depending on the workers
+# that have been selected.
+
+redis:
+    enabled: true
+
+{{ shared_worker_config }}
\ No newline at end of file

diff --git a/docker/conf-workers/supervisord.conf.j2 b/docker/conf-workers/supervisord.conf.j2
new file mode 100644
index 0000000000..0de2c6143b
--- /dev/null
+++ b/docker/conf-workers/supervisord.conf.j2
@@ -0,0 +1,41 @@
+# This file contains the base config for supervisord, as part of ../Dockerfile-workers.
+# configure_workers_and_start.py uses and amends this file depending on the workers
+# that have been selected.
+
+[supervisord]
+nodaemon=true
+user=root
+
+[program:nginx]
+command=/usr/sbin/nginx -g "daemon off;"
+priority=500
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0
+username=www-data
+autorestart=true
+
+[program:redis]
+command=/usr/bin/redis-server /etc/redis/redis.conf --daemonize no
+priority=1
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0
+username=redis
+autorestart=true
+
+[program:synapse_main]
+command=/usr/local/bin/python -m synapse.app.homeserver --config-path="{{ main_config_path }}" --config-path=/conf/workers/shared.yaml
+priority=10
+# Log startup failures to supervisord's stdout/err
+# Regular synapse logs will still go in the configured data directory
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0
+autorestart=unexpected
+exitcodes=0
+
+# Additional process blocks
+{{ worker_config }}
\ No newline at end of file

diff --git a/docker/conf-workers/worker.yaml.j2 b/docker/conf-workers/worker.yaml.j2
new file mode 100644
index 0000000000..42131afc95
--- /dev/null
+++ b/docker/conf-workers/worker.yaml.j2
@@ -0,0 +1,26 @@
+# This is a configuration template for a single worker instance, and is
+# used by Dockerfile-workers.
+# Values will change depending on which workers are selected when
+# running that image.
+
+worker_app: "{{ app }}"
+worker_name: "{{ name }}"
+
+# The replication listener on the main synapse process.
+worker_replication_host: 127.0.0.1
+worker_replication_http_port: 9093
+
+worker_listeners:
+  - type: http
+    port: {{ port }}
+{% if listener_resources %}
+    resources:
+      - names:
+{%- for resource in listener_resources %}
+        - {{ resource }}
+{%- endfor %}
+{% endif %}
+
+worker_log_config: {{ worker_log_config_filepath }}
+
+{{ worker_extra_conf }}

diff --git a/docker/conf/homeserver.yaml b/docker/conf/homeserver.yaml
index a792899540..2b23d7f428 100644
--- a/docker/conf/homeserver.yaml
+++ b/docker/conf/homeserver.yaml
@@ -40,7 +40,9 @@ listeners:
     compress: false
 {% endif %}
 
-  - port: 8008
+  # Allow configuring in case we want to reverse proxy 8008
+  # using another process in the same container
+  - port: {{ SYNAPSE_HTTP_PORT or 8008 }}
     tls: false
     bind_addresses: ['::']
     type: http

diff --git a/docker/conf/log.config b/docker/conf/log.config
index 491bbcc87a..34572bc0f3 100644
--- a/docker/conf/log.config
+++ b/docker/conf/log.config
@@ -2,9 +2,34 @@ version: 1
 
 formatters:
     precise:
-        format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
+{% if worker_name %}
+        format: '%(asctime)s - worker:{{ worker_name }} - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
+{% else %}
+        format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
+{% endif %}
 
 handlers:
+    file:
+        class: logging.handlers.TimedRotatingFileHandler
+        formatter: precise
+        filename: {{ LOG_FILE_PATH or "homeserver.log" }}
+        when: "midnight"
+        backupCount: 6  # Does not include the current log file.
+        encoding: utf8
+
+    # Default to buffering writes to log file for efficiency. This means that
+    # there will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
+    # logs will still be flushed immediately.
+    buffer:
+        class: logging.handlers.MemoryHandler
+        target: file
+        # The capacity is the number of log lines that are buffered before
+        # being written to disk. Increasing this will lead to better
+        # performance, at the expense of it taking longer for log lines to
+        # be written to disk.
+        capacity: 10
+        flushLevel: 30  # Flush for WARNING logs as well
+
     console:
         class: logging.StreamHandler
         formatter: precise
@@ -17,6 +42,11 @@ loggers:
 
 root:
     level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}
+
+{% if LOG_FILE_PATH %}
+    handlers: [console, buffer]
+{% else %}
     handlers: [console]
+{% endif %}
 
 disable_existing_loggers: false

diff --git a/docker/configure_workers_and_start.py b/docker/configure_workers_and_start.py
new file mode 100755
index 0000000000..4be6afc65d
--- /dev/null
+++ b/docker/configure_workers_and_start.py
@@ -0,0 +1,558 @@
+#!/usr/bin/env python
+# Copyright 2021 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This script reads environment variables and generates a shared Synapse worker,
+# nginx and supervisord configs depending on the workers requested.
+#
+# The environment variables it reads are:
+#   * SYNAPSE_SERVER_NAME: The desired server_name of the homeserver.
+#   * SYNAPSE_REPORT_STATS: Whether to report stats.
+#   * SYNAPSE_WORKER_TYPES: A comma separated list of worker names as specified in WORKERS_CONFIG
+#         below. Leave empty for no workers, or set to '*' for all possible workers.
+#
+# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
+# in the project's README), this script may be run multiple times, and functionality should
+# continue to work if so.
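[Editor's aside: the script's own `main`, later in this patch, implements that run-multiple-times requirement with a marker file. A distilled sketch of the pattern, with an illustrative path:]

```python
import os

MARK_FILEPATH = "/tmp/example_configured"  # illustrative; not the script's path

def configure_once() -> None:
    if os.path.exists(MARK_FILEPATH):
        return  # a previous run already generated the config files
    # ... generate worker/nginx/supervisord config files here ...
    with open(MARK_FILEPATH, "w") as f:
        f.write("")

configure_once()
```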
+ +import os +import subprocess +import sys + +import jinja2 +import yaml + +MAIN_PROCESS_HTTP_LISTENER_PORT = 8080 + + +WORKERS_CONFIG = { + "pusher": { + "app": "synapse.app.pusher", + "listener_resources": [], + "endpoint_patterns": [], + "shared_extra_conf": {"start_pushers": False}, + "worker_extra_conf": "", + }, + "user_dir": { + "app": "synapse.app.user_dir", + "listener_resources": ["client"], + "endpoint_patterns": [ + "^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$" + ], + "shared_extra_conf": {"update_user_directory": False}, + "worker_extra_conf": "", + }, + "media_repository": { + "app": "synapse.app.media_repository", + "listener_resources": ["media"], + "endpoint_patterns": [ + "^/_matrix/media/", + "^/_synapse/admin/v1/purge_media_cache$", + "^/_synapse/admin/v1/room/.*/media.*$", + "^/_synapse/admin/v1/user/.*/media.*$", + "^/_synapse/admin/v1/media/.*$", + "^/_synapse/admin/v1/quarantine_media/.*$", + ], + "shared_extra_conf": {"enable_media_repo": False}, + "worker_extra_conf": "enable_media_repo: true", + }, + "appservice": { + "app": "synapse.app.appservice", + "listener_resources": [], + "endpoint_patterns": [], + "shared_extra_conf": {"notify_appservices": False}, + "worker_extra_conf": "", + }, + "federation_sender": { + "app": "synapse.app.federation_sender", + "listener_resources": [], + "endpoint_patterns": [], + "shared_extra_conf": {"send_federation": False}, + "worker_extra_conf": "", + }, + "synchrotron": { + "app": "synapse.app.generic_worker", + "listener_resources": ["client"], + "endpoint_patterns": [ + "^/_matrix/client/(v2_alpha|r0)/sync$", + "^/_matrix/client/(api/v1|v2_alpha|r0)/events$", + "^/_matrix/client/(api/v1|r0)/initialSync$", + "^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$", + ], + "shared_extra_conf": {}, + "worker_extra_conf": "", + }, + "federation_reader": { + "app": "synapse.app.generic_worker", + "listener_resources": ["federation"], + "endpoint_patterns": [ + "^/_matrix/federation/(v1|v2)/event/", + "^/_matrix/federation/(v1|v2)/state/", + "^/_matrix/federation/(v1|v2)/state_ids/", + "^/_matrix/federation/(v1|v2)/backfill/", + "^/_matrix/federation/(v1|v2)/get_missing_events/", + "^/_matrix/federation/(v1|v2)/publicRooms", + "^/_matrix/federation/(v1|v2)/query/", + "^/_matrix/federation/(v1|v2)/make_join/", + "^/_matrix/federation/(v1|v2)/make_leave/", + "^/_matrix/federation/(v1|v2)/send_join/", + "^/_matrix/federation/(v1|v2)/send_leave/", + "^/_matrix/federation/(v1|v2)/invite/", + "^/_matrix/federation/(v1|v2)/query_auth/", + "^/_matrix/federation/(v1|v2)/event_auth/", + "^/_matrix/federation/(v1|v2)/exchange_third_party_invite/", + "^/_matrix/federation/(v1|v2)/user/devices/", + "^/_matrix/federation/(v1|v2)/get_groups_publicised$", + "^/_matrix/key/v2/query", + ], + "shared_extra_conf": {}, + "worker_extra_conf": "", + }, + "federation_inbound": { + "app": "synapse.app.generic_worker", + "listener_resources": ["federation"], + "endpoint_patterns": ["/_matrix/federation/(v1|v2)/send/"], + "shared_extra_conf": {}, + "worker_extra_conf": "", + }, + "event_persister": { + "app": "synapse.app.generic_worker", + "listener_resources": ["replication"], + "endpoint_patterns": [], + "shared_extra_conf": {}, + "worker_extra_conf": "", + }, + "background_worker": { + "app": "synapse.app.generic_worker", + "listener_resources": [], + "endpoint_patterns": [], + # This worker cannot be sharded. 
Therefore there should only ever be one background
+        # worker, and it should be named background_worker1
+        "shared_extra_conf": {"run_background_tasks_on": "background_worker1"},
+        "worker_extra_conf": "",
+    },
+    "event_creator": {
+        "app": "synapse.app.generic_worker",
+        "listener_resources": ["client"],
+        "endpoint_patterns": [
+            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
+            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
+            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
+            "^/_matrix/client/(api/v1|r0|unstable)/join/",
+            "^/_matrix/client/(api/v1|r0|unstable)/profile/",
+        ],
+        "shared_extra_conf": {},
+        "worker_extra_conf": "",
+    },
+    "frontend_proxy": {
+        "app": "synapse.app.frontend_proxy",
+        "listener_resources": ["client", "replication"],
+        "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
+        "shared_extra_conf": {},
+        "worker_extra_conf": (
+            "worker_main_http_uri: http://127.0.0.1:%d"
+            % (MAIN_PROCESS_HTTP_LISTENER_PORT,)
+        ),
+    },
+}
+
+# Templates for sections that may be inserted multiple times in config files
+SUPERVISORD_PROCESS_CONFIG_BLOCK = """
+[program:synapse_{name}]
+command=/usr/local/bin/python -m {app} \
+    --config-path="{config_path}" \
+    --config-path=/conf/workers/shared.yaml \
+    --config-path=/conf/workers/{name}.yaml
+autorestart=unexpected
+priority=500
+exitcodes=0
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0
+"""
+
+NGINX_LOCATION_CONFIG_BLOCK = """
+    location ~* {endpoint} {{
+        proxy_pass {upstream};
+        proxy_set_header X-Forwarded-For $remote_addr;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header Host $host;
+    }}
+"""
+
+NGINX_UPSTREAM_CONFIG_BLOCK = """
+upstream {upstream_worker_type} {{
+{body}
+}}
+"""
+
+
+# Utility functions
+def log(txt: str):
+    """Log something to stdout.
+
+    Args:
+        txt: The text to log.
+    """
+    print(txt)
+
+
+def error(txt: str):
+    """Log something and exit with an error code.
+
+    Args:
+        txt: The text to log in error.
+    """
+    log(txt)
+    sys.exit(2)
+
+
+def convert(src: str, dst: str, **template_vars):
+    """Generate a file from a template
+
+    Args:
+        src: Path to the input file.
+        dst: Path to write to.
+        template_vars: The arguments to replace placeholder variables in the template with.
+    """
+    # Read the template file
+    with open(src) as infile:
+        template = infile.read()
+
+    # Generate a string from the template. We disable autoescape to prevent template
+    # variables from being escaped.
+    rendered = jinja2.Template(template, autoescape=False).render(**template_vars)
+
+    # Write the generated contents to a file
+    #
+    # We use append mode in case the files have already been written to by something else
+    # (for instance, as part of the instructions in a dockerfile).
+    with open(dst, "a") as outfile:
+        # In case the existing file doesn't end with a newline
+        outfile.write("\n")
+
+        outfile.write(rendered)
+
+
+def add_sharding_to_shared_config(
+    shared_config: dict,
+    worker_type: str,
+    worker_name: str,
+    worker_port: int,
+) -> None:
+    """Given a dictionary representing a config file shared across all workers,
+    append sharded worker information to it for the current worker_type instance.
+
+    Args:
+        shared_config: The config dict that all worker instances share (after being converted to YAML)
+        worker_type: The type of worker (one of those defined in WORKERS_CONFIG).
+        worker_name: The name of the worker instance.
+        worker_port: The HTTP replication port that the worker instance is listening on.
+    """
+    # The instance_map config field marks the workers that write to various replication streams
+    instance_map = shared_config.setdefault("instance_map", {})
+
+    # Worker-type specific sharding config
+    if worker_type == "pusher":
+        shared_config.setdefault("pusher_instances", []).append(worker_name)
+
+    elif worker_type == "federation_sender":
+        shared_config.setdefault("federation_sender_instances", []).append(worker_name)
+
+    elif worker_type == "event_persister":
+        # Event persisters write to the events stream, so we need to update
+        # the list of event stream writers
+        shared_config.setdefault("stream_writers", {}).setdefault("events", []).append(
+            worker_name
+        )
+
+        # Map of stream writer instance names to host/ports combos
+        instance_map[worker_name] = {
+            "host": "localhost",
+            "port": worker_port,
+        }
+
+    elif worker_type == "media_repository":
+        # The first configured media worker will run the media background jobs
+        shared_config.setdefault("media_instance_running_background_jobs", worker_name)
+
+
+def generate_base_homeserver_config():
+    """Starts Synapse and generates a basic homeserver config, which will later be
+    modified for worker support.
+
+    Raises: CalledProcessError if calling start.py returned a non-zero exit code.
+    """
+    # start.py already does this for us, so just call that.
+    # note that this script is copied in as part of the official, monolith dockerfile
+    os.environ["SYNAPSE_HTTP_PORT"] = str(MAIN_PROCESS_HTTP_LISTENER_PORT)
+    subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"])
+
+
+def generate_worker_files(environ, config_path: str, data_dir: str):
+    """Read the desired list of workers from environment variables and generate
+    shared homeserver, nginx and supervisord configs.
+
+    Args:
+        environ: _Environ[str]
+        config_path: Where to output the generated Synapse main worker config file.
+        data_dir: The location of the synapse data directory. Where log and
+            user-facing config files live.
+    """
+    # Note that yaml cares about indentation, so care should be taken to insert lines
+    # into files at the correct indentation below.
+
+    # shared_config is the contents of a Synapse config file that will be shared amongst
+    # the main Synapse process as well as all workers.
+    # It is intended mainly for disabling functionality when certain workers are spun up,
+    # and adding a replication listener.
+
+    # First read the original config file and extract the listeners block. Then we'll add
+    # another listener for replication. Later we'll write out the result.
+    listeners = [
+        {
+            "port": 9093,
+            "bind_address": "127.0.0.1",
+            "type": "http",
+            "resources": [{"names": ["replication"]}],
+        }
+    ]
+    with open(config_path) as file_stream:
+        original_config = yaml.safe_load(file_stream)
+        original_listeners = original_config.get("listeners")
+        if original_listeners:
+            listeners += original_listeners
+
+    # The shared homeserver config. The contents of which will be inserted into the
+    # base shared worker jinja2 template.
+    #
+    # This config file will be passed to all workers, including Synapse's main process.
+    shared_config = {"listeners": listeners}
+
+    # The supervisord config. The contents of which will be inserted into the
+    # base supervisord jinja2 template.
+    #
+    # Supervisord will be in charge of running everything, from redis to nginx to Synapse
+    # and all of its worker processes.
Load the config template, which defines a few + # services that are necessary to run. + supervisord_config = "" + + # Upstreams for load-balancing purposes. This dict takes the form of a worker type to the + # ports of each worker. For example: + # { + # worker_type: {1234, 1235, ...}} + # } + # and will be used to construct 'upstream' nginx directives. + nginx_upstreams = {} + + # A map of: {"endpoint": "upstream"}, where "upstream" is a str representing what will be + # placed after the proxy_pass directive. The main benefit to representing this data as a + # dict over a str is that we can easily deduplicate endpoints across multiple instances + # of the same worker. + # + # An nginx site config that will be amended to depending on the workers that are + # spun up. To be placed in /etc/nginx/conf.d. + nginx_locations = {} + + # Read the desired worker configuration from the environment + worker_types = environ.get("SYNAPSE_WORKER_TYPES") + if worker_types is None: + # No workers, just the main process + worker_types = [] + else: + # Split type names by comma + worker_types = worker_types.split(",") + + # Create the worker configuration directory if it doesn't already exist + os.makedirs("/conf/workers", exist_ok=True) + + # Start worker ports from this arbitrary port + worker_port = 18009 + + # A counter of worker_type -> int. Used for determining the name for a given + # worker type when generating its config file, as each worker's name is just + # worker_type + instance # + worker_type_counter = {} + + # For each worker type specified by the user, create config values + for worker_type in worker_types: + worker_type = worker_type.strip() + + worker_config = WORKERS_CONFIG.get(worker_type) + if worker_config: + worker_config = worker_config.copy() + else: + log(worker_type + " is an unknown worker type! It will be ignored") + continue + + new_worker_count = worker_type_counter.setdefault(worker_type, 0) + 1 + worker_type_counter[worker_type] = new_worker_count + + # Name workers by their type concatenated with an incrementing number + # e.g. 
federation_reader1 + worker_name = worker_type + str(new_worker_count) + worker_config.update( + {"name": worker_name, "port": worker_port, "config_path": config_path} + ) + + # Update the shared config with any worker-type specific options + shared_config.update(worker_config["shared_extra_conf"]) + + # Check if more than one instance of this worker type has been specified + worker_type_total_count = worker_types.count(worker_type) + if worker_type_total_count > 1: + # Update the shared config with sharding-related options if necessary + add_sharding_to_shared_config( + shared_config, worker_type, worker_name, worker_port + ) + + # Enable the worker in supervisord + supervisord_config += SUPERVISORD_PROCESS_CONFIG_BLOCK.format_map(worker_config) + + # Add nginx location blocks for this worker's endpoints (if any are defined) + for pattern in worker_config["endpoint_patterns"]: + # Determine whether we need to load-balance this worker + if worker_type_total_count > 1: + # Create or add to a load-balanced upstream for this worker + nginx_upstreams.setdefault(worker_type, set()).add(worker_port) + + # Upstreams are named after the worker_type + upstream = "http://" + worker_type + else: + upstream = "http://localhost:%d" % (worker_port,) + + # Note that this endpoint should proxy to this upstream + nginx_locations[pattern] = upstream + + # Write out the worker's logging config file + + # Check whether we should write worker logs to disk, in addition to the console + extra_log_template_args = {} + if environ.get("SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK"): + extra_log_template_args["LOG_FILE_PATH"] = "{dir}/logs/{name}.log".format( + dir=data_dir, name=worker_name + ) + + # Render and write the file + log_config_filepath = "/conf/workers/{name}.log.config".format(name=worker_name) + convert( + "/conf/log.config", + log_config_filepath, + worker_name=worker_name, + **extra_log_template_args, + ) + + # Then a worker config file + convert( + "/conf/worker.yaml.j2", + "/conf/workers/{name}.yaml".format(name=worker_name), + **worker_config, + worker_log_config_filepath=log_config_filepath, + ) + + worker_port += 1 + + # Build the nginx location config blocks + nginx_location_config = "" + for endpoint, upstream in nginx_locations.items(): + nginx_location_config += NGINX_LOCATION_CONFIG_BLOCK.format( + endpoint=endpoint, + upstream=upstream, + ) + + # Determine the load-balancing upstreams to configure + nginx_upstream_config = "" + for upstream_worker_type, upstream_worker_ports in nginx_upstreams.items(): + body = "" + for port in upstream_worker_ports: + body += " server localhost:%d;\n" % (port,) + + # Add to the list of configured upstreams + nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format( + upstream_worker_type=upstream_worker_type, + body=body, + ) + + # Finally, we'll write out the config files. 
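[Editor's aside: a self-contained sketch, with hypothetical ports, of what the `NGINX_UPSTREAM_CONFIG_BLOCK` rendering above produces. Note the braces are doubled in the template so `str.format` leaves literal nginx braces intact.]

```python
NGINX_UPSTREAM_CONFIG_BLOCK = """
upstream {upstream_worker_type} {{
{body}
}}
"""

# Two hypothetical synchrotron instances on ports 18009 and 18010.
body = "\n".join("    server localhost:%d;" % port for port in (18009, 18010))
print(NGINX_UPSTREAM_CONFIG_BLOCK.format(upstream_worker_type="synchrotron", body=body))
# upstream synchrotron {
#     server localhost:18009;
#     server localhost:18010;
# }
```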
+ + # Shared homeserver config + convert( + "/conf/shared.yaml.j2", + "/conf/workers/shared.yaml", + shared_worker_config=yaml.dump(shared_config), + ) + + # Nginx config + convert( + "/conf/nginx.conf.j2", + "/etc/nginx/conf.d/matrix-synapse.conf", + worker_locations=nginx_location_config, + upstream_directives=nginx_upstream_config, + ) + + # Supervisord config + convert( + "/conf/supervisord.conf.j2", + "/etc/supervisor/conf.d/supervisord.conf", + main_config_path=config_path, + worker_config=supervisord_config, + ) + + # Ensure the logging directory exists + log_dir = data_dir + "/logs" + if not os.path.exists(log_dir): + os.mkdir(log_dir) + + +def start_supervisord(): + """Starts up supervisord which then starts and monitors all other necessary processes + + Raises: CalledProcessError if calling start.py return a non-zero exit code. + """ + subprocess.run(["/usr/bin/supervisord"], stdin=subprocess.PIPE) + + +def main(args, environ): + config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data") + config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml") + data_dir = environ.get("SYNAPSE_DATA_DIR", "/data") + + # override SYNAPSE_NO_TLS, we don't support TLS in worker mode, + # this needs to be handled by a frontend proxy + environ["SYNAPSE_NO_TLS"] = "yes" + + # Generate the base homeserver config if one does not yet exist + if not os.path.exists(config_path): + log("Generating base homeserver config") + generate_base_homeserver_config() + + # This script may be run multiple times (mostly by Complement, see note at top of file). + # Don't re-configure workers in this instance. + mark_filepath = "/conf/workers_have_been_configured" + if not os.path.exists(mark_filepath): + # Always regenerate all other config files + generate_worker_files(environ, config_path, data_dir) + + # Mark workers as being configured + with open(mark_filepath, "w") as f: + f.write("") + + # Start supervisord, which will start Synapse, all of the configured worker + # processes, redis, nginx etc. according to the config we created above. + start_supervisord() + + +if __name__ == "__main__": + main(sys.argv, os.environ) -- cgit 1.5.1 From 4b965c862dc66c0da5d3240add70e9b5f0aa720b Mon Sep 17 00:00:00 2001 From: Jonathan de Jong Date: Wed, 14 Apr 2021 16:34:27 +0200 Subject: Remove redundant "coding: utf-8" lines (#9786) Part of #9744 Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as python 3 automatically reads source code as utf-8 now. 
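[Editor's aside: the cleanup is mechanical — PEP 3120 made UTF-8 the default source encoding in Python 3, so the declarations are redundant. A sketch of the kind of transformation involved:]

```python
import re
from pathlib import Path

CODING_LINE = re.compile(r"^#\s*-\*-\s*coding:\s*utf-8\s*-\*-$")

def strip_coding_declaration(path: Path) -> None:
    # Drop any `# -*- coding: utf-8 -*-` lines; Python 3 assumes utf-8 anyway.
    lines = path.read_text(encoding="utf-8").splitlines(keepends=True)
    kept = [line for line in lines if not CODING_LINE.match(line.strip())]
    if kept != lines:
        path.write_text("".join(kept), encoding="utf-8")
```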
`Signed-off-by: Jonathan de Jong ` --- .buildkite/scripts/create_postgres_db.py | 1 - changelog.d/9786.misc | 1 + contrib/cmdclient/http.py | 1 - contrib/experiments/test_messaging.py | 1 - scripts-dev/mypy_synapse_plugin.py | 1 - scripts-dev/sign_json | 1 - scripts-dev/update_database | 1 - scripts/export_signing_key | 1 - scripts/generate_log_config | 1 - scripts/generate_signing_key.py | 1 - scripts/move_remote_media_to_new_store.py | 1 - scripts/register_new_matrix_user | 1 - scripts/synapse_port_db | 1 - stubs/frozendict.pyi | 1 - stubs/txredisapi.pyi | 1 - synapse/__init__.py | 1 - synapse/_scripts/register_new_matrix_user.py | 1 - synapse/api/__init__.py | 1 - synapse/api/auth.py | 1 - synapse/api/auth_blocking.py | 1 - synapse/api/constants.py | 1 - synapse/api/errors.py | 1 - synapse/api/filtering.py | 1 - synapse/api/presence.py | 1 - synapse/api/room_versions.py | 1 - synapse/api/urls.py | 1 - synapse/app/__init__.py | 1 - synapse/app/_base.py | 1 - synapse/app/admin_cmd.py | 1 - synapse/app/appservice.py | 1 - synapse/app/client_reader.py | 1 - synapse/app/event_creator.py | 1 - synapse/app/federation_reader.py | 1 - synapse/app/federation_sender.py | 1 - synapse/app/frontend_proxy.py | 1 - synapse/app/generic_worker.py | 1 - synapse/app/homeserver.py | 1 - synapse/app/media_repository.py | 1 - synapse/app/pusher.py | 1 - synapse/app/synchrotron.py | 1 - synapse/app/user_dir.py | 1 - synapse/appservice/__init__.py | 1 - synapse/appservice/api.py | 1 - synapse/appservice/scheduler.py | 1 - synapse/config/__init__.py | 1 - synapse/config/__main__.py | 1 - synapse/config/_base.py | 1 - synapse/config/_util.py | 1 - synapse/config/auth.py | 1 - synapse/config/cache.py | 1 - synapse/config/cas.py | 1 - synapse/config/consent_config.py | 1 - synapse/config/database.py | 1 - synapse/config/emailconfig.py | 1 - synapse/config/experimental.py | 1 - synapse/config/federation.py | 1 - synapse/config/groups.py | 1 - synapse/config/homeserver.py | 1 - synapse/config/jwt_config.py | 1 - synapse/config/key.py | 1 - synapse/config/logger.py | 1 - synapse/config/metrics.py | 1 - synapse/config/oidc_config.py | 1 - synapse/config/password_auth_providers.py | 1 - synapse/config/push.py | 1 - synapse/config/redis.py | 1 - synapse/config/registration.py | 1 - synapse/config/repository.py | 1 - synapse/config/room.py | 1 - synapse/config/room_directory.py | 1 - synapse/config/saml2_config.py | 1 - synapse/config/server.py | 1 - synapse/config/server_notices_config.py | 1 - synapse/config/spam_checker.py | 1 - synapse/config/sso.py | 1 - synapse/config/stats.py | 1 - synapse/config/third_party_event_rules.py | 1 - synapse/config/tls.py | 1 - synapse/config/tracer.py | 1 - synapse/config/user_directory.py | 1 - synapse/config/workers.py | 1 - synapse/crypto/__init__.py | 1 - synapse/crypto/event_signing.py | 1 - synapse/crypto/keyring.py | 1 - synapse/event_auth.py | 1 - synapse/events/__init__.py | 1 - synapse/events/builder.py | 1 - synapse/events/presence_router.py | 1 - synapse/events/snapshot.py | 1 - synapse/events/spamcheck.py | 1 - synapse/events/third_party_rules.py | 1 - synapse/events/utils.py | 1 - synapse/events/validator.py | 1 - synapse/federation/__init__.py | 1 - synapse/federation/federation_base.py | 1 - synapse/federation/federation_client.py | 1 - synapse/federation/federation_server.py | 1 - synapse/federation/persistence.py | 1 - synapse/federation/send_queue.py | 1 - synapse/federation/sender/__init__.py | 1 - synapse/federation/sender/per_destination_queue.py | 1 - 
synapse/federation/sender/transaction_manager.py | 1 - synapse/federation/transport/__init__.py | 1 - synapse/federation/transport/client.py | 1 - synapse/federation/transport/server.py | 1 - synapse/federation/units.py | 1 - synapse/groups/attestations.py | 1 - synapse/groups/groups_server.py | 1 - synapse/handlers/__init__.py | 1 - synapse/handlers/_base.py | 1 - synapse/handlers/account_data.py | 1 - synapse/handlers/account_validity.py | 1 - synapse/handlers/acme.py | 1 - synapse/handlers/acme_issuing_service.py | 1 - synapse/handlers/admin.py | 1 - synapse/handlers/appservice.py | 1 - synapse/handlers/auth.py | 1 - synapse/handlers/cas_handler.py | 1 - synapse/handlers/deactivate_account.py | 1 - synapse/handlers/device.py | 1 - synapse/handlers/devicemessage.py | 1 - synapse/handlers/directory.py | 1 - synapse/handlers/e2e_keys.py | 1 - synapse/handlers/e2e_room_keys.py | 1 - synapse/handlers/events.py | 1 - synapse/handlers/federation.py | 1 - synapse/handlers/groups_local.py | 1 - synapse/handlers/identity.py | 1 - synapse/handlers/initial_sync.py | 1 - synapse/handlers/message.py | 1 - synapse/handlers/oidc_handler.py | 1 - synapse/handlers/pagination.py | 1 - synapse/handlers/password_policy.py | 1 - synapse/handlers/presence.py | 1 - synapse/handlers/profile.py | 1 - synapse/handlers/read_marker.py | 1 - synapse/handlers/receipts.py | 1 - synapse/handlers/register.py | 1 - synapse/handlers/room.py | 1 - synapse/handlers/room_list.py | 1 - synapse/handlers/room_member.py | 1 - synapse/handlers/room_member_worker.py | 1 - synapse/handlers/saml_handler.py | 1 - synapse/handlers/search.py | 1 - synapse/handlers/set_password.py | 1 - synapse/handlers/space_summary.py | 1 - synapse/handlers/sso.py | 1 - synapse/handlers/state_deltas.py | 1 - synapse/handlers/stats.py | 1 - synapse/handlers/sync.py | 1 - synapse/handlers/typing.py | 1 - synapse/handlers/ui_auth/__init__.py | 1 - synapse/handlers/ui_auth/checkers.py | 1 - synapse/handlers/user_directory.py | 1 - synapse/http/__init__.py | 1 - synapse/http/additional_resource.py | 1 - synapse/http/client.py | 1 - synapse/http/connectproxyclient.py | 1 - synapse/http/federation/__init__.py | 1 - synapse/http/federation/matrix_federation_agent.py | 1 - synapse/http/federation/srv_resolver.py | 1 - synapse/http/federation/well_known_resolver.py | 1 - synapse/http/matrixfederationclient.py | 1 - synapse/http/proxyagent.py | 1 - synapse/http/request_metrics.py | 1 - synapse/http/server.py | 1 - synapse/http/servlet.py | 1 - synapse/logging/__init__.py | 1 - synapse/logging/_remote.py | 1 - synapse/logging/_structured.py | 1 - synapse/logging/_terse_json.py | 1 - synapse/logging/filter.py | 1 - synapse/logging/formatter.py | 1 - synapse/logging/opentracing.py | 1 - synapse/logging/scopecontextmanager.py | 1 - synapse/logging/utils.py | 1 - synapse/metrics/__init__.py | 1 - synapse/metrics/_exposition.py | 1 - synapse/metrics/background_process_metrics.py | 1 - synapse/module_api/__init__.py | 1 - synapse/module_api/errors.py | 1 - synapse/notifier.py | 1 - synapse/push/__init__.py | 1 - synapse/push/action_generator.py | 1 - synapse/push/bulk_push_rule_evaluator.py | 1 - synapse/push/clientformat.py | 1 - synapse/push/emailpusher.py | 1 - synapse/push/httppusher.py | 1 - synapse/push/mailer.py | 1 - synapse/push/presentable_names.py | 1 - synapse/push/push_rule_evaluator.py | 1 - synapse/push/push_tools.py | 1 - synapse/push/pusher.py | 1 - synapse/push/pusherpool.py | 1 - synapse/replication/__init__.py | 1 - 
synapse/replication/http/__init__.py | 1 - synapse/replication/http/_base.py | 1 - synapse/replication/http/account_data.py | 1 - synapse/replication/http/devices.py | 1 - synapse/replication/http/federation.py | 1 - synapse/replication/http/login.py | 1 - synapse/replication/http/membership.py | 1 - synapse/replication/http/presence.py | 1 - synapse/replication/http/push.py | 1 - synapse/replication/http/register.py | 1 - synapse/replication/http/send_event.py | 1 - synapse/replication/http/streams.py | 1 - synapse/replication/slave/__init__.py | 1 - synapse/replication/slave/storage/__init__.py | 1 - synapse/replication/slave/storage/_base.py | 1 - synapse/replication/slave/storage/_slaved_id_tracker.py | 1 - synapse/replication/slave/storage/account_data.py | 1 - synapse/replication/slave/storage/appservice.py | 1 - synapse/replication/slave/storage/client_ips.py | 1 - synapse/replication/slave/storage/deviceinbox.py | 1 - synapse/replication/slave/storage/devices.py | 1 - synapse/replication/slave/storage/directory.py | 1 - synapse/replication/slave/storage/events.py | 1 - synapse/replication/slave/storage/filtering.py | 1 - synapse/replication/slave/storage/groups.py | 1 - synapse/replication/slave/storage/keys.py | 1 - synapse/replication/slave/storage/presence.py | 1 - synapse/replication/slave/storage/profile.py | 1 - synapse/replication/slave/storage/push_rule.py | 1 - synapse/replication/slave/storage/pushers.py | 1 - synapse/replication/slave/storage/receipts.py | 1 - synapse/replication/slave/storage/registration.py | 1 - synapse/replication/slave/storage/room.py | 1 - synapse/replication/slave/storage/transactions.py | 1 - synapse/replication/tcp/__init__.py | 1 - synapse/replication/tcp/client.py | 1 - synapse/replication/tcp/commands.py | 1 - synapse/replication/tcp/external_cache.py | 1 - synapse/replication/tcp/handler.py | 1 - synapse/replication/tcp/protocol.py | 1 - synapse/replication/tcp/redis.py | 1 - synapse/replication/tcp/resource.py | 1 - synapse/replication/tcp/streams/__init__.py | 1 - synapse/replication/tcp/streams/_base.py | 1 - synapse/replication/tcp/streams/events.py | 1 - synapse/replication/tcp/streams/federation.py | 1 - synapse/rest/__init__.py | 1 - synapse/rest/admin/__init__.py | 1 - synapse/rest/admin/_base.py | 1 - synapse/rest/admin/devices.py | 1 - synapse/rest/admin/event_reports.py | 1 - synapse/rest/admin/groups.py | 1 - synapse/rest/admin/media.py | 1 - synapse/rest/admin/purge_room_servlet.py | 1 - synapse/rest/admin/rooms.py | 1 - synapse/rest/admin/server_notice_servlet.py | 1 - synapse/rest/admin/statistics.py | 1 - synapse/rest/admin/users.py | 1 - synapse/rest/client/__init__.py | 1 - synapse/rest/client/transactions.py | 1 - synapse/rest/client/v1/__init__.py | 1 - synapse/rest/client/v1/directory.py | 1 - synapse/rest/client/v1/events.py | 1 - synapse/rest/client/v1/initial_sync.py | 1 - synapse/rest/client/v1/login.py | 1 - synapse/rest/client/v1/logout.py | 1 - synapse/rest/client/v1/presence.py | 1 - synapse/rest/client/v1/profile.py | 1 - synapse/rest/client/v1/push_rule.py | 1 - synapse/rest/client/v1/pusher.py | 1 - synapse/rest/client/v1/room.py | 1 - synapse/rest/client/v1/voip.py | 1 - synapse/rest/client/v2_alpha/__init__.py | 1 - synapse/rest/client/v2_alpha/_base.py | 1 - synapse/rest/client/v2_alpha/account.py | 1 - synapse/rest/client/v2_alpha/account_data.py | 1 - synapse/rest/client/v2_alpha/account_validity.py | 1 - synapse/rest/client/v2_alpha/auth.py | 1 - synapse/rest/client/v2_alpha/capabilities.py | 1 - 
synapse/rest/client/v2_alpha/devices.py | 1 - synapse/rest/client/v2_alpha/filter.py | 1 - synapse/rest/client/v2_alpha/groups.py | 1 - synapse/rest/client/v2_alpha/keys.py | 1 - synapse/rest/client/v2_alpha/notifications.py | 1 - synapse/rest/client/v2_alpha/openid.py | 1 - synapse/rest/client/v2_alpha/password_policy.py | 1 - synapse/rest/client/v2_alpha/read_marker.py | 1 - synapse/rest/client/v2_alpha/receipts.py | 1 - synapse/rest/client/v2_alpha/register.py | 1 - synapse/rest/client/v2_alpha/relations.py | 1 - synapse/rest/client/v2_alpha/report_event.py | 1 - synapse/rest/client/v2_alpha/room_keys.py | 1 - synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py | 1 - synapse/rest/client/v2_alpha/sendtodevice.py | 1 - synapse/rest/client/v2_alpha/shared_rooms.py | 1 - synapse/rest/client/v2_alpha/sync.py | 1 - synapse/rest/client/v2_alpha/tags.py | 1 - synapse/rest/client/v2_alpha/thirdparty.py | 1 - synapse/rest/client/v2_alpha/tokenrefresh.py | 1 - synapse/rest/client/v2_alpha/user_directory.py | 1 - synapse/rest/client/versions.py | 1 - synapse/rest/consent/consent_resource.py | 1 - synapse/rest/health.py | 1 - synapse/rest/key/__init__.py | 1 - synapse/rest/key/v2/__init__.py | 1 - synapse/rest/key/v2/local_key_resource.py | 1 - synapse/rest/media/v1/__init__.py | 1 - synapse/rest/media/v1/_base.py | 1 - synapse/rest/media/v1/config_resource.py | 1 - synapse/rest/media/v1/download_resource.py | 1 - synapse/rest/media/v1/filepath.py | 1 - synapse/rest/media/v1/media_repository.py | 1 - synapse/rest/media/v1/media_storage.py | 1 - synapse/rest/media/v1/preview_url_resource.py | 1 - synapse/rest/media/v1/storage_provider.py | 1 - synapse/rest/media/v1/thumbnail_resource.py | 1 - synapse/rest/media/v1/thumbnailer.py | 1 - synapse/rest/media/v1/upload_resource.py | 1 - synapse/rest/synapse/__init__.py | 1 - synapse/rest/synapse/client/__init__.py | 1 - synapse/rest/synapse/client/new_user_consent.py | 1 - synapse/rest/synapse/client/oidc/__init__.py | 1 - synapse/rest/synapse/client/oidc/callback_resource.py | 1 - synapse/rest/synapse/client/password_reset.py | 1 - synapse/rest/synapse/client/pick_idp.py | 1 - synapse/rest/synapse/client/pick_username.py | 1 - synapse/rest/synapse/client/saml2/__init__.py | 1 - synapse/rest/synapse/client/saml2/metadata_resource.py | 1 - synapse/rest/synapse/client/saml2/response_resource.py | 1 - synapse/rest/synapse/client/sso_register.py | 1 - synapse/rest/well_known.py | 1 - synapse/secrets.py | 1 - synapse/server.py | 1 - synapse/server_notices/consent_server_notices.py | 1 - synapse/server_notices/resource_limits_server_notices.py | 1 - synapse/server_notices/server_notices_manager.py | 1 - synapse/server_notices/server_notices_sender.py | 1 - synapse/server_notices/worker_server_notices_sender.py | 1 - synapse/spam_checker_api/__init__.py | 1 - synapse/state/__init__.py | 1 - synapse/state/v1.py | 1 - synapse/state/v2.py | 1 - synapse/storage/__init__.py | 1 - synapse/storage/_base.py | 1 - synapse/storage/background_updates.py | 1 - synapse/storage/database.py | 1 - synapse/storage/databases/__init__.py | 1 - synapse/storage/databases/main/__init__.py | 1 - synapse/storage/databases/main/account_data.py | 1 - synapse/storage/databases/main/appservice.py | 1 - synapse/storage/databases/main/cache.py | 1 - synapse/storage/databases/main/censor_events.py | 1 - synapse/storage/databases/main/client_ips.py | 1 - synapse/storage/databases/main/deviceinbox.py | 1 - synapse/storage/databases/main/devices.py | 1 - 
synapse/storage/databases/main/directory.py | 1 - synapse/storage/databases/main/e2e_room_keys.py | 1 - synapse/storage/databases/main/end_to_end_keys.py | 1 - synapse/storage/databases/main/event_federation.py | 1 - synapse/storage/databases/main/event_push_actions.py | 1 - synapse/storage/databases/main/events.py | 1 - synapse/storage/databases/main/events_bg_updates.py | 1 - synapse/storage/databases/main/events_forward_extremities.py | 1 - synapse/storage/databases/main/events_worker.py | 1 - synapse/storage/databases/main/filtering.py | 1 - synapse/storage/databases/main/group_server.py | 1 - synapse/storage/databases/main/keys.py | 1 - synapse/storage/databases/main/media_repository.py | 1 - synapse/storage/databases/main/metrics.py | 1 - synapse/storage/databases/main/monthly_active_users.py | 1 - synapse/storage/databases/main/presence.py | 1 - synapse/storage/databases/main/profile.py | 1 - synapse/storage/databases/main/purge_events.py | 1 - synapse/storage/databases/main/push_rule.py | 1 - synapse/storage/databases/main/pusher.py | 1 - synapse/storage/databases/main/receipts.py | 1 - synapse/storage/databases/main/registration.py | 1 - synapse/storage/databases/main/rejections.py | 1 - synapse/storage/databases/main/relations.py | 1 - synapse/storage/databases/main/room.py | 1 - synapse/storage/databases/main/roommember.py | 1 - .../databases/main/schema/delta/50/make_event_content_nullable.py | 1 - .../storage/databases/main/schema/delta/57/local_current_membership.py | 1 - synapse/storage/databases/main/search.py | 1 - synapse/storage/databases/main/signatures.py | 1 - synapse/storage/databases/main/state.py | 1 - synapse/storage/databases/main/state_deltas.py | 1 - synapse/storage/databases/main/stats.py | 1 - synapse/storage/databases/main/stream.py | 1 - synapse/storage/databases/main/tags.py | 1 - synapse/storage/databases/main/transactions.py | 1 - synapse/storage/databases/main/ui_auth.py | 1 - synapse/storage/databases/main/user_directory.py | 1 - synapse/storage/databases/main/user_erasure_store.py | 1 - synapse/storage/databases/state/__init__.py | 1 - synapse/storage/databases/state/bg_updates.py | 1 - synapse/storage/databases/state/store.py | 1 - synapse/storage/engines/__init__.py | 1 - synapse/storage/engines/_base.py | 1 - synapse/storage/engines/postgres.py | 1 - synapse/storage/engines/sqlite.py | 1 - synapse/storage/keys.py | 1 - synapse/storage/persist_events.py | 1 - synapse/storage/prepare_database.py | 1 - synapse/storage/purge_events.py | 1 - synapse/storage/push_rule.py | 1 - synapse/storage/relations.py | 1 - synapse/storage/roommember.py | 1 - synapse/storage/state.py | 1 - synapse/storage/types.py | 1 - synapse/storage/util/__init__.py | 1 - synapse/storage/util/id_generators.py | 1 - synapse/storage/util/sequence.py | 1 - synapse/streams/__init__.py | 1 - synapse/streams/config.py | 1 - synapse/streams/events.py | 1 - synapse/types.py | 1 - synapse/util/__init__.py | 1 - synapse/util/async_helpers.py | 1 - synapse/util/caches/__init__.py | 1 - synapse/util/caches/cached_call.py | 1 - synapse/util/caches/deferred_cache.py | 1 - synapse/util/caches/descriptors.py | 1 - synapse/util/caches/dictionary_cache.py | 1 - synapse/util/caches/expiringcache.py | 1 - synapse/util/caches/lrucache.py | 1 - synapse/util/caches/response_cache.py | 1 - synapse/util/caches/stream_change_cache.py | 1 - synapse/util/caches/ttlcache.py | 1 - synapse/util/daemonize.py | 1 - synapse/util/distributor.py | 1 - synapse/util/file_consumer.py | 1 - synapse/util/frozenutils.py | 
1 - synapse/util/hash.py | 2 -- synapse/util/iterutils.py | 1 - synapse/util/jsonobject.py | 1 - synapse/util/macaroons.py | 1 - synapse/util/metrics.py | 1 - synapse/util/module_loader.py | 1 - synapse/util/msisdn.py | 1 - synapse/util/patch_inline_callbacks.py | 1 - synapse/util/ratelimitutils.py | 1 - synapse/util/retryutils.py | 1 - synapse/util/rlimit.py | 1 - synapse/util/stringutils.py | 1 - synapse/util/templates.py | 1 - synapse/util/threepids.py | 1 - synapse/util/versionstring.py | 1 - synapse/util/wheel_timer.py | 1 - synapse/visibility.py | 1 - synctl | 1 - synmark/__init__.py | 1 - synmark/__main__.py | 1 - synmark/suites/logging.py | 1 - synmark/suites/lrucache.py | 1 - synmark/suites/lrucache_evict.py | 1 - tests/__init__.py | 1 - tests/api/test_auth.py | 1 - tests/api/test_filtering.py | 1 - tests/app/test_frontend_proxy.py | 1 - tests/app/test_openid_listener.py | 1 - tests/appservice/__init__.py | 1 - tests/appservice/test_appservice.py | 1 - tests/appservice/test_scheduler.py | 1 - tests/config/__init__.py | 1 - tests/config/test_base.py | 1 - tests/config/test_cache.py | 1 - tests/config/test_database.py | 1 - tests/config/test_generate.py | 1 - tests/config/test_load.py | 1 - tests/config/test_ratelimiting.py | 1 - tests/config/test_room_directory.py | 1 - tests/config/test_server.py | 1 - tests/config/test_tls.py | 1 - tests/config/test_util.py | 1 - tests/crypto/__init__.py | 1 - tests/crypto/test_event_signing.py | 1 - tests/crypto/test_keyring.py | 1 - tests/events/test_presence_router.py | 1 - tests/events/test_snapshot.py | 1 - tests/events/test_utils.py | 1 - tests/federation/test_complexity.py | 1 - tests/federation/test_federation_sender.py | 1 - tests/federation/test_federation_server.py | 1 - tests/federation/transport/test_server.py | 1 - tests/handlers/test_admin.py | 1 - tests/handlers/test_appservice.py | 1 - tests/handlers/test_auth.py | 1 - tests/handlers/test_device.py | 1 - tests/handlers/test_directory.py | 1 - tests/handlers/test_e2e_keys.py | 1 - tests/handlers/test_e2e_room_keys.py | 1 - tests/handlers/test_federation.py | 1 - tests/handlers/test_message.py | 1 - tests/handlers/test_oidc.py | 1 - tests/handlers/test_password_providers.py | 1 - tests/handlers/test_presence.py | 1 - tests/handlers/test_profile.py | 1 - tests/handlers/test_register.py | 1 - tests/handlers/test_stats.py | 1 - tests/handlers/test_sync.py | 1 - tests/handlers/test_typing.py | 1 - tests/handlers/test_user_directory.py | 1 - tests/http/__init__.py | 1 - tests/http/federation/__init__.py | 1 - tests/http/federation/test_matrix_federation_agent.py | 1 - tests/http/federation/test_srv_resolver.py | 1 - tests/http/test_additional_resource.py | 1 - tests/http/test_endpoint.py | 1 - tests/http/test_fedclient.py | 1 - tests/http/test_proxyagent.py | 1 - tests/http/test_servlet.py | 1 - tests/http/test_simple_client.py | 1 - tests/logging/__init__.py | 1 - tests/logging/test_remote_handler.py | 1 - tests/logging/test_terse_json.py | 1 - tests/module_api/test_api.py | 1 - tests/push/test_email.py | 1 - tests/push/test_http.py | 1 - tests/push/test_push_rule_evaluator.py | 1 - tests/replication/__init__.py | 1 - tests/replication/_base.py | 1 - tests/replication/slave/__init__.py | 1 - tests/replication/slave/storage/__init__.py | 1 - tests/replication/tcp/__init__.py | 1 - tests/replication/tcp/streams/__init__.py | 1 - tests/replication/tcp/streams/test_account_data.py | 1 - tests/replication/tcp/streams/test_events.py | 1 - tests/replication/tcp/streams/test_federation.py | 1 
- tests/replication/tcp/streams/test_receipts.py | 1 - tests/replication/tcp/streams/test_typing.py | 1 - tests/replication/tcp/test_commands.py | 1 - tests/replication/tcp/test_remote_server_up.py | 1 - tests/replication/test_auth.py | 1 - tests/replication/test_client_reader_shard.py | 1 - tests/replication/test_federation_ack.py | 1 - tests/replication/test_federation_sender_shard.py | 1 - tests/replication/test_multi_media_repo.py | 1 - tests/replication/test_pusher_shard.py | 1 - tests/replication/test_sharded_event_persister.py | 1 - tests/rest/__init__.py | 1 - tests/rest/admin/__init__.py | 1 - tests/rest/admin/test_admin.py | 1 - tests/rest/admin/test_device.py | 1 - tests/rest/admin/test_event_reports.py | 1 - tests/rest/admin/test_media.py | 1 - tests/rest/admin/test_room.py | 1 - tests/rest/admin/test_statistics.py | 1 - tests/rest/admin/test_user.py | 1 - tests/rest/client/__init__.py | 1 - tests/rest/client/test_consent.py | 1 - tests/rest/client/test_ephemeral_message.py | 1 - tests/rest/client/test_identity.py | 1 - tests/rest/client/test_power_levels.py | 1 - tests/rest/client/test_redactions.py | 1 - tests/rest/client/test_retention.py | 1 - tests/rest/client/test_third_party_rules.py | 1 - tests/rest/client/v1/__init__.py | 1 - tests/rest/client/v1/test_directory.py | 1 - tests/rest/client/v1/test_events.py | 1 - tests/rest/client/v1/test_login.py | 1 - tests/rest/client/v1/test_presence.py | 1 - tests/rest/client/v1/test_profile.py | 1 - tests/rest/client/v1/test_push_rule_attrs.py | 1 - tests/rest/client/v1/test_rooms.py | 1 - tests/rest/client/v1/test_typing.py | 1 - tests/rest/client/v1/utils.py | 1 - tests/rest/client/v2_alpha/test_account.py | 1 - tests/rest/client/v2_alpha/test_auth.py | 1 - tests/rest/client/v2_alpha/test_capabilities.py | 1 - tests/rest/client/v2_alpha/test_filter.py | 1 - tests/rest/client/v2_alpha/test_password_policy.py | 1 - tests/rest/client/v2_alpha/test_register.py | 1 - tests/rest/client/v2_alpha/test_relations.py | 1 - tests/rest/client/v2_alpha/test_shared_rooms.py | 1 - tests/rest/client/v2_alpha/test_sync.py | 1 - tests/rest/client/v2_alpha/test_upgrade_room.py | 1 - tests/rest/key/v2/test_remote_key_resource.py | 1 - tests/rest/media/__init__.py | 1 - tests/rest/media/v1/__init__.py | 1 - tests/rest/media/v1/test_base.py | 1 - tests/rest/media/v1/test_media_storage.py | 1 - tests/rest/media/v1/test_url_preview.py | 1 - tests/rest/test_health.py | 1 - tests/rest/test_well_known.py | 1 - tests/scripts/test_new_matrix_user.py | 1 - tests/server_notices/test_consent.py | 1 - tests/server_notices/test_resource_limits_server_notices.py | 1 - tests/state/test_v2.py | 1 - tests/storage/test__base.py | 1 - tests/storage/test_account_data.py | 1 - tests/storage/test_appservice.py | 1 - tests/storage/test_base.py | 1 - tests/storage/test_cleanup_extrems.py | 1 - tests/storage/test_client_ips.py | 1 - tests/storage/test_database.py | 1 - tests/storage/test_devices.py | 1 - tests/storage/test_directory.py | 1 - tests/storage/test_e2e_room_keys.py | 1 - tests/storage/test_end_to_end_keys.py | 1 - tests/storage/test_event_chain.py | 1 - tests/storage/test_event_federation.py | 1 - tests/storage/test_event_metrics.py | 1 - tests/storage/test_event_push_actions.py | 1 - tests/storage/test_events.py | 1 - tests/storage/test_id_generators.py | 1 - tests/storage/test_keys.py | 1 - tests/storage/test_main.py | 1 - tests/storage/test_monthly_active_users.py | 1 - tests/storage/test_profile.py | 1 - tests/storage/test_purge.py | 1 - 
tests/storage/test_redaction.py | 1 - tests/storage/test_registration.py | 1 - tests/storage/test_room.py | 1 - tests/storage/test_roommember.py | 1 - tests/storage/test_state.py | 1 - tests/storage/test_transactions.py | 1 - tests/storage/test_user_directory.py | 1 - tests/test_distributor.py | 1 - tests/test_event_auth.py | 1 - tests/test_federation.py | 1 - tests/test_mau.py | 1 - tests/test_metrics.py | 1 - tests/test_phone_home.py | 1 - tests/test_preview.py | 1 - tests/test_state.py | 1 - tests/test_test_utils.py | 1 - tests/test_types.py | 1 - tests/test_utils/__init__.py | 1 - tests/test_utils/event_injection.py | 1 - tests/test_utils/html_parsers.py | 1 - tests/test_utils/logging_setup.py | 1 - tests/test_visibility.py | 1 - tests/unittest.py | 1 - tests/util/__init__.py | 1 - tests/util/caches/__init__.py | 1 - tests/util/caches/test_cached_call.py | 1 - tests/util/caches/test_deferred_cache.py | 1 - tests/util/caches/test_descriptors.py | 1 - tests/util/caches/test_ttlcache.py | 1 - tests/util/test_async_utils.py | 1 - tests/util/test_dict_cache.py | 1 - tests/util/test_expiring_cache.py | 1 - tests/util/test_file_consumer.py | 1 - tests/util/test_itertools.py | 1 - tests/util/test_linearizer.py | 1 - tests/util/test_logformatter.py | 1 - tests/util/test_lrucache.py | 1 - tests/util/test_ratelimitutils.py | 1 - tests/util/test_retryutils.py | 1 - tests/util/test_rwlock.py | 1 - tests/util/test_stringutils.py | 1 - tests/util/test_threepids.py | 1 - tests/util/test_treecache.py | 1 - tests/util/test_wheel_timer.py | 1 - tests/utils.py | 1 - 651 files changed, 1 insertion(+), 651 deletions(-) create mode 100644 changelog.d/9786.misc diff --git a/.buildkite/scripts/create_postgres_db.py b/.buildkite/scripts/create_postgres_db.py index 956339de5c..cc829db216 100755 --- a/.buildkite/scripts/create_postgres_db.py +++ b/.buildkite/scripts/create_postgres_db.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/changelog.d/9786.misc b/changelog.d/9786.misc new file mode 100644 index 0000000000..cf265db749 --- /dev/null +++ b/changelog.d/9786.misc @@ -0,0 +1 @@ +Apply `pyupgrade` across the codebase. \ No newline at end of file diff --git a/contrib/cmdclient/http.py b/contrib/cmdclient/http.py index 1cf913756e..1310f078e3 100644 --- a/contrib/cmdclient/http.py +++ b/contrib/cmdclient/http.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/contrib/experiments/test_messaging.py b/contrib/experiments/test_messaging.py index 7fbc7d8fc6..31b8a68225 100644 --- a/contrib/experiments/test_messaging.py +++ b/contrib/experiments/test_messaging.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts-dev/mypy_synapse_plugin.py b/scripts-dev/mypy_synapse_plugin.py index 18df68305b..1217e14874 100644 --- a/scripts-dev/mypy_synapse_plugin.py +++ b/scripts-dev/mypy_synapse_plugin.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts-dev/sign_json b/scripts-dev/sign_json index 44553fb79a..4a43d3f2b0 100755 --- a/scripts-dev/sign_json +++ b/scripts-dev/sign_json @@ -1,6 +1,5 @@ #!/usr/bin/env python # -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts-dev/update_database b/scripts-dev/update_database index 56365e2b58..87f709b6ed 100755 --- a/scripts-dev/update_database +++ b/scripts-dev/update_database @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts/export_signing_key b/scripts/export_signing_key index 8aec9d802b..0ed167ea85 100755 --- a/scripts/export_signing_key +++ b/scripts/export_signing_key @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts/generate_log_config b/scripts/generate_log_config index a13a5634a3..e72a0dafb7 100755 --- a/scripts/generate_log_config +++ b/scripts/generate_log_config @@ -1,6 +1,5 @@ #!/usr/bin/env python3 -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts/generate_signing_key.py b/scripts/generate_signing_key.py index 16d7c4f382..07df25a809 100755 --- a/scripts/generate_signing_key.py +++ b/scripts/generate_signing_key.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts/move_remote_media_to_new_store.py b/scripts/move_remote_media_to_new_store.py index 8477955a90..875aa4781f 100755 --- a/scripts/move_remote_media_to_new_store.py +++ b/scripts/move_remote_media_to_new_store.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts/register_new_matrix_user b/scripts/register_new_matrix_user index 8b9d30877d..00104b9d62 100755 --- a/scripts/register_new_matrix_user +++ b/scripts/register_new_matrix_user @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/scripts/synapse_port_db b/scripts/synapse_port_db index 58edf6af6c..b7c1ffc956 100755 --- a/scripts/synapse_port_db +++ b/scripts/synapse_port_db @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/stubs/frozendict.pyi b/stubs/frozendict.pyi index 0368ba4703..24c6f3af77 100644 --- a/stubs/frozendict.pyi +++ b/stubs/frozendict.pyi @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/stubs/txredisapi.pyi b/stubs/txredisapi.pyi index 080ca40287..c1a06ae022 100644 --- a/stubs/txredisapi.pyi +++ b/stubs/txredisapi.pyi @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/__init__.py b/synapse/__init__.py index 125a73d378..c2709c2ad4 100644 --- a/synapse/__init__.py +++ b/synapse/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018-9 New Vector Ltd # diff --git a/synapse/_scripts/register_new_matrix_user.py b/synapse/_scripts/register_new_matrix_user.py index dfe26dea6d..dae986c788 100644 --- a/synapse/_scripts/register_new_matrix_user.py +++ b/synapse/_scripts/register_new_matrix_user.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2018 New Vector # diff --git a/synapse/api/__init__.py b/synapse/api/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/synapse/api/__init__.py +++ b/synapse/api/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/api/auth.py b/synapse/api/auth.py index 7d9930ae7b..6c13f53957 100644 --- a/synapse/api/auth.py +++ b/synapse/api/auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/api/auth_blocking.py b/synapse/api/auth_blocking.py index d8088f524a..a8df60cb89 100644 --- a/synapse/api/auth_blocking.py +++ b/synapse/api/auth_blocking.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/api/constants.py b/synapse/api/constants.py index a8ae41de48..31a59bceec 100644 --- a/synapse/api/constants.py +++ b/synapse/api/constants.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2018-2019 New Vector Ltd diff --git a/synapse/api/errors.py b/synapse/api/errors.py index 2a789ea3e8..0231c79079 100644 --- a/synapse/api/errors.py +++ b/synapse/api/errors.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/api/filtering.py b/synapse/api/filtering.py index 5caf336fd0..ce49a0ad58 100644 --- a/synapse/api/filtering.py +++ b/synapse/api/filtering.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2018-2019 New Vector Ltd diff --git a/synapse/api/presence.py b/synapse/api/presence.py index b9a8e29460..a3bf0348d1 100644 --- a/synapse/api/presence.py +++ b/synapse/api/presence.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/api/room_versions.py b/synapse/api/room_versions.py index 87038d436d..c9f9596ada 100644 --- a/synapse/api/room_versions.py +++ b/synapse/api/room_versions.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/api/urls.py b/synapse/api/urls.py index 6379c86dde..4b1f213c75 100644 --- a/synapse/api/urls.py +++ b/synapse/api/urls.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/app/__init__.py b/synapse/app/__init__.py index d1a2cd5e19..f9940491e8 100644 --- a/synapse/app/__init__.py +++ b/synapse/app/__init__.py @@ -1,4 
+1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/_base.py b/synapse/app/_base.py index 3912c8994c..2113c4f370 100644 --- a/synapse/app/_base.py +++ b/synapse/app/_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # Copyright 2019-2021 The Matrix.org Foundation C.I.C # diff --git a/synapse/app/admin_cmd.py b/synapse/app/admin_cmd.py index 9f99651aa2..eb256db749 100644 --- a/synapse/app/admin_cmd.py +++ b/synapse/app/admin_cmd.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2019 Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/appservice.py b/synapse/app/appservice.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/appservice.py +++ b/synapse/app/appservice.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/client_reader.py b/synapse/app/client_reader.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/client_reader.py +++ b/synapse/app/client_reader.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/event_creator.py b/synapse/app/event_creator.py index e9c098c4e7..57af28f10a 100644 --- a/synapse/app/event_creator.py +++ b/synapse/app/event_creator.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/federation_reader.py b/synapse/app/federation_reader.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/federation_reader.py +++ b/synapse/app/federation_reader.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/federation_sender.py b/synapse/app/federation_sender.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/federation_sender.py +++ b/synapse/app/federation_sender.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/frontend_proxy.py b/synapse/app/frontend_proxy.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/frontend_proxy.py +++ b/synapse/app/frontend_proxy.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/generic_worker.py b/synapse/app/generic_worker.py index d1c2079233..e35e17492c 100644 --- a/synapse/app/generic_worker.py +++ b/synapse/app/generic_worker.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. 
# diff --git a/synapse/app/homeserver.py b/synapse/app/homeserver.py index 3bfe9d507f..679b7f4289 100644 --- a/synapse/app/homeserver.py +++ b/synapse/app/homeserver.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # diff --git a/synapse/app/media_repository.py b/synapse/app/media_repository.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/media_repository.py +++ b/synapse/app/media_repository.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/pusher.py b/synapse/app/pusher.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/pusher.py +++ b/synapse/app/pusher.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/synchrotron.py b/synapse/app/synchrotron.py index add43147b3..2d50060ffb 100644 --- a/synapse/app/synchrotron.py +++ b/synapse/app/synchrotron.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/app/user_dir.py b/synapse/app/user_dir.py index 503d44f687..a368efb354 100644 --- a/synapse/app/user_dir.py +++ b/synapse/app/user_dir.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/appservice/__init__.py b/synapse/appservice/__init__.py index 0bfc5e445f..6504c6bd3f 100644 --- a/synapse/appservice/__init__.py +++ b/synapse/appservice/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/appservice/api.py b/synapse/appservice/api.py index 9d3bbe3b8b..fe04d7a672 100644 --- a/synapse/appservice/api.py +++ b/synapse/appservice/api.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/appservice/scheduler.py b/synapse/appservice/scheduler.py index 5203ffe90f..6a2ce99b55 100644 --- a/synapse/appservice/scheduler.py +++ b/synapse/appservice/scheduler.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/__init__.py b/synapse/config/__init__.py index 1e76e9559d..d2f889159e 100644 --- a/synapse/config/__init__.py +++ b/synapse/config/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/__main__.py b/synapse/config/__main__.py index 65043d5b5b..b5b6735a8f 100644 --- a/synapse/config/__main__.py +++ b/synapse/config/__main__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/_base.py b/synapse/config/_base.py index ba9cd63cf2..08e2c2c543 100644 --- a/synapse/config/_base.py +++ b/synapse/config/_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The 
Matrix.org Foundation C.I.C. diff --git a/synapse/config/_util.py b/synapse/config/_util.py index 8fce7f6bb1..3edb4b7106 100644 --- a/synapse/config/_util.py +++ b/synapse/config/_util.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/auth.py b/synapse/config/auth.py index 9aabaadf9e..e10d641a96 100644 --- a/synapse/config/auth.py +++ b/synapse/config/auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. # diff --git a/synapse/config/cache.py b/synapse/config/cache.py index 4e8abbf88a..41b9b3f51f 100644 --- a/synapse/config/cache.py +++ b/synapse/config/cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/cas.py b/synapse/config/cas.py index dbf5085965..901f4123e1 100644 --- a/synapse/config/cas.py +++ b/synapse/config/cas.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/consent_config.py b/synapse/config/consent_config.py index c47f364b14..30d07cc219 100644 --- a/synapse/config/consent_config.py +++ b/synapse/config/consent_config.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/database.py b/synapse/config/database.py index e7889b9c20..79a02706b4 100644 --- a/synapse/config/database.py +++ b/synapse/config/database.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. # diff --git a/synapse/config/emailconfig.py b/synapse/config/emailconfig.py index 52505ac5d2..c587939c7a 100644 --- a/synapse/config/emailconfig.py +++ b/synapse/config/emailconfig.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/config/experimental.py b/synapse/config/experimental.py index eb96ecda74..a693fba877 100644 --- a/synapse/config/experimental.py +++ b/synapse/config/experimental.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/federation.py b/synapse/config/federation.py index 55e4db5442..090ba047fa 100644 --- a/synapse/config/federation.py +++ b/synapse/config/federation.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/groups.py b/synapse/config/groups.py index 7b7860ea71..15c2e64bda 100644 --- a/synapse/config/groups.py +++ b/synapse/config/groups.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/homeserver.py b/synapse/config/homeserver.py index 64a2429f77..1309535068 100644 --- a/synapse/config/homeserver.py +++ b/synapse/config/homeserver.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/config/jwt_config.py b/synapse/config/jwt_config.py index f30330abb6..9e07e73008 100644 --- a/synapse/config/jwt_config.py +++ b/synapse/config/jwt_config.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015 Niklas Riekenbrauck # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/key.py b/synapse/config/key.py index 350ff1d665..94a9063043 100644 --- a/synapse/config/key.py +++ b/synapse/config/key.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/config/logger.py b/synapse/config/logger.py index 999aecce5c..b174e0df6d 100644 --- a/synapse/config/logger.py +++ b/synapse/config/logger.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/metrics.py b/synapse/config/metrics.py index 2b289f4208..7ac82edb0e 100644 --- a/synapse/config/metrics.py +++ b/synapse/config/metrics.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/config/oidc_config.py b/synapse/config/oidc_config.py index 05733ec41d..5fb94376fd 100644 --- a/synapse/config/oidc_config.py +++ b/synapse/config/oidc_config.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Quentin Gliech # Copyright 2020-2021 The Matrix.org Foundation C.I.C. # diff --git a/synapse/config/password_auth_providers.py b/synapse/config/password_auth_providers.py index 85d07c4f8f..1cf69734bb 100644 --- a/synapse/config/password_auth_providers.py +++ b/synapse/config/password_auth_providers.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 Openmarket # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/push.py b/synapse/config/push.py index 7831a2ef79..6ef8491caf 100644 --- a/synapse/config/push.py +++ b/synapse/config/push.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd # diff --git a/synapse/config/redis.py b/synapse/config/redis.py index 1373302335..33104af734 100644 --- a/synapse/config/redis.py +++ b/synapse/config/redis.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/registration.py b/synapse/config/registration.py index f27d1e14ac..f8a2768af8 100644 --- a/synapse/config/registration.py +++ b/synapse/config/registration.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/repository.py b/synapse/config/repository.py index 061c4ec83f..146bc55d6f 100644 --- a/synapse/config/repository.py +++ b/synapse/config/repository.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014, 2015 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/room.py b/synapse/config/room.py index 692d7a1936..d889d90dbc 100644 --- a/synapse/config/room.py +++ b/synapse/config/room.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/room_directory.py b/synapse/config/room_directory.py index 2dd719c388..56981cac79 100644 --- a/synapse/config/room_directory.py +++ b/synapse/config/room_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/saml2_config.py b/synapse/config/saml2_config.py index 6db9cb5ced..55a7838b10 100644 --- a/synapse/config/saml2_config.py +++ b/synapse/config/saml2_config.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/config/server.py b/synapse/config/server.py index 8decc9d10d..02b86b11a5 100644 --- a/synapse/config/server.py +++ b/synapse/config/server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/config/server_notices_config.py b/synapse/config/server_notices_config.py index 57f69dc8e2..48bf3241b6 100644 --- a/synapse/config/server_notices_config.py +++ b/synapse/config/server_notices_config.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/spam_checker.py b/synapse/config/spam_checker.py index 3d05abc158..447ba3303b 100644 --- a/synapse/config/spam_checker.py +++ b/synapse/config/spam_checker.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/sso.py b/synapse/config/sso.py index 243cc681e8..af645c930d 100644 --- a/synapse/config/sso.py +++ b/synapse/config/sso.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/stats.py b/synapse/config/stats.py index 2258329a52..3d44b51201 100644 --- a/synapse/config/stats.py +++ b/synapse/config/stats.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/third_party_event_rules.py b/synapse/config/third_party_event_rules.py index c04e1c4e07..f502ff539e 100644 --- a/synapse/config/third_party_event_rules.py +++ b/synapse/config/third_party_event_rules.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/tls.py b/synapse/config/tls.py index 85b5db4c40..b041869758 100644 --- a/synapse/config/tls.py +++ b/synapse/config/tls.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/tracer.py b/synapse/config/tracer.py index 727a1e7008..db22b5b19f 100644 --- a/synapse/config/tracer.py +++ b/synapse/config/tracer.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C.d # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/user_directory.py b/synapse/config/user_directory.py index 8d05ef173c..4cbf79eeed 100644 --- a/synapse/config/user_directory.py +++ b/synapse/config/user_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/config/workers.py b/synapse/config/workers.py index ac92375a85..b2540163d1 100644 --- a/synapse/config/workers.py +++ b/synapse/config/workers.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/crypto/__init__.py b/synapse/crypto/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/synapse/crypto/__init__.py +++ b/synapse/crypto/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/crypto/event_signing.py b/synapse/crypto/event_signing.py index 8fb116ae18..0f2b632e47 100644 --- a/synapse/crypto/event_signing.py +++ b/synapse/crypto/event_signing.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # Copyright 2014-2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. diff --git a/synapse/crypto/keyring.py b/synapse/crypto/keyring.py index d5fb51513b..40073dc7c2 100644 --- a/synapse/crypto/keyring.py +++ b/synapse/crypto/keyring.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017, 2018 New Vector Ltd # diff --git a/synapse/event_auth.py b/synapse/event_auth.py index 9863953f5c..5234e3f81e 100644 --- a/synapse/event_auth.py +++ b/synapse/event_auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. 
# diff --git a/synapse/events/__init__.py b/synapse/events/__init__.py index f9032e3697..c8b52cbc7a 100644 --- a/synapse/events/__init__.py +++ b/synapse/events/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. diff --git a/synapse/events/builder.py b/synapse/events/builder.py index c1c0426f6e..5793553a88 100644 --- a/synapse/events/builder.py +++ b/synapse/events/builder.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/events/presence_router.py b/synapse/events/presence_router.py index 24cd389d80..6c37c8a7a4 100644 --- a/synapse/events/presence_router.py +++ b/synapse/events/presence_router.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/events/snapshot.py b/synapse/events/snapshot.py index 7295df74fe..f8d898c3b1 100644 --- a/synapse/events/snapshot.py +++ b/synapse/events/snapshot.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/events/spamcheck.py b/synapse/events/spamcheck.py index a9185987a2..c727b48c1e 100644 --- a/synapse/events/spamcheck.py +++ b/synapse/events/spamcheck.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/events/third_party_rules.py b/synapse/events/third_party_rules.py index 9767d23940..f7944fd834 100644 --- a/synapse/events/third_party_rules.py +++ b/synapse/events/third_party_rules.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/events/utils.py b/synapse/events/utils.py index 0f8a3b5ad8..7d7cd9aaee 100644 --- a/synapse/events/utils.py +++ b/synapse/events/utils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/events/validator.py b/synapse/events/validator.py index f8f3b1a31e..fa6987d7cb 100644 --- a/synapse/events/validator.py +++ b/synapse/events/validator.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/__init__.py b/synapse/federation/__init__.py index f5f0bdfca3..46300cba25 100644 --- a/synapse/federation/__init__.py +++ b/synapse/federation/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/federation_base.py b/synapse/federation/federation_base.py index 383737520a..949dcd4614 100644 --- a/synapse/federation/federation_base.py +++ b/synapse/federation/federation_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. 
# diff --git a/synapse/federation/federation_client.py b/synapse/federation/federation_client.py index 55533d7501..f93335edaa 100644 --- a/synapse/federation/federation_client.py +++ b/synapse/federation/federation_client.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/federation_server.py b/synapse/federation/federation_server.py index b9f8d966a6..3ff6479cfb 100644 --- a/synapse/federation/federation_server.py +++ b/synapse/federation/federation_server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # Copyright 2019 Matrix.org Federation C.I.C diff --git a/synapse/federation/persistence.py b/synapse/federation/persistence.py index ce5fc758f0..2f9c9bc2cd 100644 --- a/synapse/federation/persistence.py +++ b/synapse/federation/persistence.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/send_queue.py b/synapse/federation/send_queue.py index 0c18c49abb..e3f0bc2471 100644 --- a/synapse/federation/send_queue.py +++ b/synapse/federation/send_queue.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/sender/__init__.py b/synapse/federation/sender/__init__.py index d821dcbf6a..155161685d 100644 --- a/synapse/federation/sender/__init__.py +++ b/synapse/federation/sender/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/sender/per_destination_queue.py b/synapse/federation/sender/per_destination_queue.py index e9c8a9f20a..3b053ebcfb 100644 --- a/synapse/federation/sender/per_destination_queue.py +++ b/synapse/federation/sender/per_destination_queue.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # diff --git a/synapse/federation/sender/transaction_manager.py b/synapse/federation/sender/transaction_manager.py index 07b740c2f2..12fe3a719b 100644 --- a/synapse/federation/sender/transaction_manager.py +++ b/synapse/federation/sender/transaction_manager.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/transport/__init__.py b/synapse/federation/transport/__init__.py index 5db733af98..3c9a0f6944 100644 --- a/synapse/federation/transport/__init__.py +++ b/synapse/federation/transport/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/federation/transport/client.py b/synapse/federation/transport/client.py index 6aee47c431..ada322a81e 100644 --- a/synapse/federation/transport/client.py +++ b/synapse/federation/transport/client.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/federation/transport/server.py b/synapse/federation/transport/server.py index a9c1391d27..a3759bdda1 100644 --- a/synapse/federation/transport/server.py +++ b/synapse/federation/transport/server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 
2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/federation/units.py b/synapse/federation/units.py index 0f8bf000ac..c83a261918 100644 --- a/synapse/federation/units.py +++ b/synapse/federation/units.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/groups/attestations.py b/synapse/groups/attestations.py index 368c44708d..d2fc8be5f5 100644 --- a/synapse/groups/attestations.py +++ b/synapse/groups/attestations.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/groups/groups_server.py b/synapse/groups/groups_server.py index 4b16a4ac29..a06d060ebf 100644 --- a/synapse/groups/groups_server.py +++ b/synapse/groups/groups_server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # Copyright 2018 New Vector Ltd # Copyright 2019 Michael Telatynski <7t3chguy@gmail.com> diff --git a/synapse/handlers/__init__.py b/synapse/handlers/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/synapse/handlers/__init__.py +++ b/synapse/handlers/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/_base.py b/synapse/handlers/_base.py index fb899aa90d..d800e16912 100644 --- a/synapse/handlers/_base.py +++ b/synapse/handlers/_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/account_data.py b/synapse/handlers/account_data.py index 1ce6d697ed..affb54e0ee 100644 --- a/synapse/handlers/account_data.py +++ b/synapse/handlers/account_data.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2021 The Matrix.org Foundation C.I.C. # diff --git a/synapse/handlers/account_validity.py b/synapse/handlers/account_validity.py index bee1447c2e..66ce7e8b83 100644 --- a/synapse/handlers/account_validity.py +++ b/synapse/handlers/account_validity.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/acme.py b/synapse/handlers/acme.py index 2a25af6288..16ab93f580 100644 --- a/synapse/handlers/acme.py +++ b/synapse/handlers/acme.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/acme_issuing_service.py b/synapse/handlers/acme_issuing_service.py index ae2a9dd9c2..a972d3fa0a 100644 --- a/synapse/handlers/acme_issuing_service.py +++ b/synapse/handlers/acme_issuing_service.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. 
# diff --git a/synapse/handlers/admin.py b/synapse/handlers/admin.py index c494de49a3..f72ded038e 100644 --- a/synapse/handlers/admin.py +++ b/synapse/handlers/admin.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/appservice.py b/synapse/handlers/appservice.py index 9fb7ee335d..d7bc4e23ed 100644 --- a/synapse/handlers/appservice.py +++ b/synapse/handlers/appservice.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/auth.py b/synapse/handlers/auth.py index 08e413bc98..b8a37b6477 100644 --- a/synapse/handlers/auth.py +++ b/synapse/handlers/auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2019 - 2020 The Matrix.org Foundation C.I.C. diff --git a/synapse/handlers/cas_handler.py b/synapse/handlers/cas_handler.py index 5060936f94..7346ccfe93 100644 --- a/synapse/handlers/cas_handler.py +++ b/synapse/handlers/cas_handler.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/deactivate_account.py b/synapse/handlers/deactivate_account.py index 2bcd8f5435..3f6f9f7f3d 100644 --- a/synapse/handlers/deactivate_account.py +++ b/synapse/handlers/deactivate_account.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017, 2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/handlers/device.py b/synapse/handlers/device.py index 7e76db3e2a..d75edb184b 100644 --- a/synapse/handlers/device.py +++ b/synapse/handlers/device.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # Copyright 2019,2020 The Matrix.org Foundation C.I.C. diff --git a/synapse/handlers/devicemessage.py b/synapse/handlers/devicemessage.py index c971eeb4d2..c5d631de07 100644 --- a/synapse/handlers/devicemessage.py +++ b/synapse/handlers/devicemessage.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/directory.py b/synapse/handlers/directory.py index abcf86352d..90932316f3 100644 --- a/synapse/handlers/directory.py +++ b/synapse/handlers/directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/e2e_keys.py b/synapse/handlers/e2e_keys.py index 92b18378fc..974487800d 100644 --- a/synapse/handlers/e2e_keys.py +++ b/synapse/handlers/e2e_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2018-2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/handlers/e2e_room_keys.py b/synapse/handlers/e2e_room_keys.py index a910d246d6..31742236a9 100644 --- a/synapse/handlers/e2e_room_keys.py +++ b/synapse/handlers/e2e_room_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017, 2018 New Vector Ltd # Copyright 2019 Matrix.org Foundation C.I.C. 
# diff --git a/synapse/handlers/events.py b/synapse/handlers/events.py index f46cab7325..d82144d7fa 100644 --- a/synapse/handlers/events.py +++ b/synapse/handlers/events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py index 67888898ff..fe1d83f6b8 100644 --- a/synapse/handlers/federation.py +++ b/synapse/handlers/federation.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/handlers/groups_local.py b/synapse/handlers/groups_local.py index a41ca5df9c..157f2ff218 100644 --- a/synapse/handlers/groups_local.py +++ b/synapse/handlers/groups_local.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/handlers/identity.py b/synapse/handlers/identity.py index d89fa5fb30..87a8b89237 100644 --- a/synapse/handlers/identity.py +++ b/synapse/handlers/identity.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2018 New Vector Ltd diff --git a/synapse/handlers/initial_sync.py b/synapse/handlers/initial_sync.py index 13f8152283..76242865ae 100644 --- a/synapse/handlers/initial_sync.py +++ b/synapse/handlers/initial_sync.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/message.py b/synapse/handlers/message.py index 125dae6d25..ec8eb21674 100644 --- a/synapse/handlers/message.py +++ b/synapse/handlers/message.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/handlers/oidc_handler.py b/synapse/handlers/oidc_handler.py index 6624212d6f..b156196a70 100644 --- a/synapse/handlers/oidc_handler.py +++ b/synapse/handlers/oidc_handler.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Quentin Gliech # Copyright 2021 The Matrix.org Foundation C.I.C. # diff --git a/synapse/handlers/pagination.py b/synapse/handlers/pagination.py index 66dc886c81..1e1186c29e 100644 --- a/synapse/handlers/pagination.py +++ b/synapse/handlers/pagination.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # Copyright 2017 - 2018 New Vector Ltd # diff --git a/synapse/handlers/password_policy.py b/synapse/handlers/password_policy.py index 92cefa11aa..cd21efdcc6 100644 --- a/synapse/handlers/password_policy.py +++ b/synapse/handlers/password_policy.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index 0047907cd9..251b48148d 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. 
# diff --git a/synapse/handlers/profile.py b/synapse/handlers/profile.py index a755363c3f..05b4a97b59 100644 --- a/synapse/handlers/profile.py +++ b/synapse/handlers/profile.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/read_marker.py b/synapse/handlers/read_marker.py index a54fe1968e..c679a8303e 100644 --- a/synapse/handlers/read_marker.py +++ b/synapse/handlers/read_marker.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/receipts.py b/synapse/handlers/receipts.py index dbfe9bfaca..f782d9db32 100644 --- a/synapse/handlers/receipts.py +++ b/synapse/handlers/receipts.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/register.py b/synapse/handlers/register.py index 3b6660c873..007fb12840 100644 --- a/synapse/handlers/register.py +++ b/synapse/handlers/register.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/room.py b/synapse/handlers/room.py index 4b3d0d72e3..5a888b7941 100644 --- a/synapse/handlers/room.py +++ b/synapse/handlers/room.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # Copyright 2018-2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/handlers/room_list.py b/synapse/handlers/room_list.py index 924b81db7c..141c9c0444 100644 --- a/synapse/handlers/room_list.py +++ b/synapse/handlers/room_list.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/room_member.py b/synapse/handlers/room_member.py index 894ef859f4..2bbfac6471 100644 --- a/synapse/handlers/room_member.py +++ b/synapse/handlers/room_member.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016-2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/room_member_worker.py b/synapse/handlers/room_member_worker.py index 3a90fc0c16..3e89dd2315 100644 --- a/synapse/handlers/room_member_worker.py +++ b/synapse/handlers/room_member_worker.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/saml_handler.py b/synapse/handlers/saml_handler.py index ec2ba11c75..80ba65b9e0 100644 --- a/synapse/handlers/saml_handler.py +++ b/synapse/handlers/saml_handler.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/search.py b/synapse/handlers/search.py index d742dfbd53..4e718d3f63 100644 --- a/synapse/handlers/search.py +++ b/synapse/handlers/search.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/set_password.py b/synapse/handlers/set_password.py index f98a338ec5..a63fac8283 100644 --- a/synapse/handlers/set_password.py +++ b/synapse/handlers/set_password.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/space_summary.py b/synapse/handlers/space_summary.py index 5d9418969d..01e3e050f9 100644 --- a/synapse/handlers/space_summary.py +++ b/synapse/handlers/space_summary.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/sso.py b/synapse/handlers/sso.py index 415b1c2d17..8d00ffdc73 100644 --- a/synapse/handlers/sso.py +++ b/synapse/handlers/sso.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/state_deltas.py b/synapse/handlers/state_deltas.py index ee8f87e59a..077c7c0649 100644 --- a/synapse/handlers/state_deltas.py +++ b/synapse/handlers/state_deltas.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/stats.py b/synapse/handlers/stats.py index 8730f99d03..383e34026e 100644 --- a/synapse/handlers/stats.py +++ b/synapse/handlers/stats.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/sync.py b/synapse/handlers/sync.py index f8d88ef77b..dc8ee8cd17 100644 --- a/synapse/handlers/sync.py +++ b/synapse/handlers/sync.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2018, 2019 New Vector Ltd # diff --git a/synapse/handlers/typing.py b/synapse/handlers/typing.py index bb35af099d..e22393adc4 100644 --- a/synapse/handlers/typing.py +++ b/synapse/handlers/typing.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/ui_auth/__init__.py b/synapse/handlers/ui_auth/__init__.py index a68d5e790e..4c3b669fae 100644 --- a/synapse/handlers/ui_auth/__init__.py +++ b/synapse/handlers/ui_auth/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/handlers/ui_auth/checkers.py b/synapse/handlers/ui_auth/checkers.py index 3d66bf305e..0eeb7c03f2 100644 --- a/synapse/handlers/ui_auth/checkers.py +++ b/synapse/handlers/ui_auth/checkers.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/handlers/user_directory.py b/synapse/handlers/user_directory.py
index b121286d95..9b1e6d5c18 100644
--- a/synapse/handlers/user_directory.py
+++ b/synapse/handlers/user_directory.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/http/__init__.py b/synapse/http/__init__.py
index 142b007d01..ed4671b7de 100644
--- a/synapse/http/__init__.py
+++ b/synapse/http/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/http/additional_resource.py b/synapse/http/additional_resource.py
index 479746c9c5..55ea97a07f 100644
--- a/synapse/http/additional_resource.py
+++ b/synapse/http/additional_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/http/client.py b/synapse/http/client.py
index f7a07f0466..1730187ffa 100644
--- a/synapse/http/client.py
+++ b/synapse/http/client.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/http/connectproxyclient.py b/synapse/http/connectproxyclient.py
index b797e3ce80..17e1c5abb1 100644
--- a/synapse/http/connectproxyclient.py
+++ b/synapse/http/connectproxyclient.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/http/federation/__init__.py b/synapse/http/federation/__init__.py
index 1453d04571..743fb9904a 100644
--- a/synapse/http/federation/__init__.py
+++ b/synapse/http/federation/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/http/federation/matrix_federation_agent.py b/synapse/http/federation/matrix_federation_agent.py
index 5935a125fd..950770201a 100644
--- a/synapse/http/federation/matrix_federation_agent.py
+++ b/synapse/http/federation/matrix_federation_agent.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/http/federation/srv_resolver.py b/synapse/http/federation/srv_resolver.py
index d9620032d2..b8ed4ec905 100644
--- a/synapse/http/federation/srv_resolver.py
+++ b/synapse/http/federation/srv_resolver.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2019 New Vector Ltd
 #
diff --git a/synapse/http/federation/well_known_resolver.py b/synapse/http/federation/well_known_resolver.py
index ce4079f15c..20d39a4ea6 100644
--- a/synapse/http/federation/well_known_resolver.py
+++ b/synapse/http/federation/well_known_resolver.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/http/matrixfederationclient.py b/synapse/http/matrixfederationclient.py
index ab47dec8f2..d48721a4e2 100644
--- a/synapse/http/matrixfederationclient.py
+++ b/synapse/http/matrixfederationclient.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/http/proxyagent.py b/synapse/http/proxyagent.py
index ea5ad14cb0..7dfae8b786 100644
--- a/synapse/http/proxyagent.py
+++ b/synapse/http/proxyagent.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/http/request_metrics.py b/synapse/http/request_metrics.py
index 0ec5d941b8..602f93c497 100644
--- a/synapse/http/request_metrics.py
+++ b/synapse/http/request_metrics.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/http/server.py b/synapse/http/server.py
index fa89260850..845651e606 100644
--- a/synapse/http/server.py
+++ b/synapse/http/server.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/http/servlet.py b/synapse/http/servlet.py
index 0e637f4701..31897546a9 100644
--- a/synapse/http/servlet.py
+++ b/synapse/http/servlet.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/__init__.py b/synapse/logging/__init__.py
index b28b7b2ef7..e00969f8b1 100644
--- a/synapse/logging/__init__.py
+++ b/synapse/logging/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/_remote.py b/synapse/logging/_remote.py
index 643492ceaf..4e8b0f8d10 100644
--- a/synapse/logging/_remote.py
+++ b/synapse/logging/_remote.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/_structured.py b/synapse/logging/_structured.py
index 3e054f615c..c7a971a9d6 100644
--- a/synapse/logging/_structured.py
+++ b/synapse/logging/_structured.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/_terse_json.py b/synapse/logging/_terse_json.py
index 2fbf5549a1..8002a250a2 100644
--- a/synapse/logging/_terse_json.py
+++ b/synapse/logging/_terse_json.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/filter.py b/synapse/logging/filter.py
index 1baf8dd679..ed51a4726c 100644
--- a/synapse/logging/filter.py
+++ b/synapse/logging/filter.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/formatter.py b/synapse/logging/formatter.py
index 11f60a77f7..c0f12ecd15 100644
--- a/synapse/logging/formatter.py
+++ b/synapse/logging/formatter.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/opentracing.py b/synapse/logging/opentracing.py
index bfe9136fd8..fba2fa3904 100644
--- a/synapse/logging/opentracing.py
+++ b/synapse/logging/opentracing.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/scopecontextmanager.py b/synapse/logging/scopecontextmanager.py
index 7b9c657456..b1e8e08fe9 100644
--- a/synapse/logging/scopecontextmanager.py
+++ b/synapse/logging/scopecontextmanager.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/logging/utils.py b/synapse/logging/utils.py
index fd3543ab04..08895e72ee 100644
--- a/synapse/logging/utils.py
+++ b/synapse/logging/utils.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/metrics/__init__.py b/synapse/metrics/__init__.py
index 13a5bc4558..31b7b3c256 100644
--- a/synapse/metrics/__init__.py
+++ b/synapse/metrics/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/metrics/_exposition.py b/synapse/metrics/_exposition.py
index 71320a1402..8002be56e0 100644
--- a/synapse/metrics/_exposition.py
+++ b/synapse/metrics/_exposition.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015-2019 Prometheus Python Client Developers
 # Copyright 2019 Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/metrics/background_process_metrics.py b/synapse/metrics/background_process_metrics.py
index e8a9096c03..cbd0894e57 100644
--- a/synapse/metrics/background_process_metrics.py
+++ b/synapse/metrics/background_process_metrics.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/module_api/__init__.py b/synapse/module_api/__init__.py
index ca1bd4cdc9..b7dbbfc27c 100644
--- a/synapse/module_api/__init__.py
+++ b/synapse/module_api/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 New Vector Ltd
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/module_api/errors.py b/synapse/module_api/errors.py
index b15441772c..d24864c549 100644
--- a/synapse/module_api/errors.py
+++ b/synapse/module_api/errors.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/notifier.py b/synapse/notifier.py
index 7ce34380af..d5ab77058d 100644
--- a/synapse/notifier.py
+++ b/synapse/notifier.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014 - 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/__init__.py b/synapse/push/__init__.py
index 9fc3da49a2..2c23afe8e3 100644
--- a/synapse/push/__init__.py
+++ b/synapse/push/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/action_generator.py b/synapse/push/action_generator.py
index 38a47a600f..60758df016 100644
--- a/synapse/push/action_generator.py
+++ b/synapse/push/action_generator.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/bulk_push_rule_evaluator.py b/synapse/push/bulk_push_rule_evaluator.py
index 1897f59153..50b470c310 100644
--- a/synapse/push/bulk_push_rule_evaluator.py
+++ b/synapse/push/bulk_push_rule_evaluator.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015 OpenMarket Ltd
 # Copyright 2017 New Vector Ltd
 #
diff --git a/synapse/push/clientformat.py b/synapse/push/clientformat.py
index 0cadba761a..2ee0ccd58a 100644
--- a/synapse/push/clientformat.py
+++ b/synapse/push/clientformat.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/emailpusher.py b/synapse/push/emailpusher.py
index c0968dc7a1..cd89b54305 100644
--- a/synapse/push/emailpusher.py
+++ b/synapse/push/emailpusher.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/httppusher.py b/synapse/push/httppusher.py
index 26af5309c1..06bf5f8ada 100644
--- a/synapse/push/httppusher.py
+++ b/synapse/push/httppusher.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2017 New Vector Ltd
 #
diff --git a/synapse/push/mailer.py b/synapse/push/mailer.py
index 2e5161de2c..c4b43b0d3f 100644
--- a/synapse/push/mailer.py
+++ b/synapse/push/mailer.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/presentable_names.py b/synapse/push/presentable_names.py
index 04c2c1482c..412941393f 100644
--- a/synapse/push/presentable_names.py
+++ b/synapse/push/presentable_names.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/push_rule_evaluator.py b/synapse/push/push_rule_evaluator.py
index ba1877adcd..49ecb38522 100644
--- a/synapse/push/push_rule_evaluator.py
+++ b/synapse/push/push_rule_evaluator.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2017 New Vector Ltd
 #
diff --git a/synapse/push/push_tools.py b/synapse/push/push_tools.py
index df34103224..9c85200c0f 100644
--- a/synapse/push/push_tools.py
+++ b/synapse/push/push_tools.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/pusher.py b/synapse/push/pusher.py
index cb94127850..c51938b8cf 100644
--- a/synapse/push/pusher.py
+++ b/synapse/push/pusher.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/push/pusherpool.py b/synapse/push/pusherpool.py
index 4c7f5fecee..564a5ed0df 100644
--- a/synapse/push/pusherpool.py
+++ b/synapse/push/pusherpool.py
@@ -1,5 +1,4 @@
 #!/usr/bin/env python
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/__init__.py b/synapse/replication/__init__.py
index b7df13c9ee..f43a360a80 100644
--- a/synapse/replication/__init__.py
+++ b/synapse/replication/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/__init__.py b/synapse/replication/http/__init__.py
index cb4a52dbe9..ba8114ac9e 100644
--- a/synapse/replication/http/__init__.py
+++ b/synapse/replication/http/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/_base.py b/synapse/replication/http/_base.py
index b7aa0c280f..ece03467b5 100644
--- a/synapse/replication/http/_base.py
+++ b/synapse/replication/http/_base.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/account_data.py b/synapse/replication/http/account_data.py
index 60899b6ad6..70e951af63 100644
--- a/synapse/replication/http/account_data.py
+++ b/synapse/replication/http/account_data.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/devices.py b/synapse/replication/http/devices.py
index 807b85d2e1..5a5818ef61 100644
--- a/synapse/replication/http/devices.py
+++ b/synapse/replication/http/devices.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/federation.py b/synapse/replication/http/federation.py
index 82ea3b895f..79cadb7b57 100644
--- a/synapse/replication/http/federation.py
+++ b/synapse/replication/http/federation.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/login.py b/synapse/replication/http/login.py
index 4ec1bfa6ea..c2e8c00293 100644
--- a/synapse/replication/http/login.py
+++ b/synapse/replication/http/login.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/membership.py b/synapse/replication/http/membership.py
index c10992ff51..289a397d68 100644
--- a/synapse/replication/http/membership.py
+++ b/synapse/replication/http/membership.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/presence.py b/synapse/replication/http/presence.py
index bc9aa82cb4..f25307620d 100644
--- a/synapse/replication/http/presence.py
+++ b/synapse/replication/http/presence.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/push.py b/synapse/replication/http/push.py
index 054ed64d34..139427cb1f 100644
--- a/synapse/replication/http/push.py
+++ b/synapse/replication/http/push.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/register.py b/synapse/replication/http/register.py
index 73d7477854..d6dd7242eb 100644
--- a/synapse/replication/http/register.py
+++ b/synapse/replication/http/register.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/send_event.py b/synapse/replication/http/send_event.py
index a4c5b44292..fae5ffa451 100644
--- a/synapse/replication/http/send_event.py
+++ b/synapse/replication/http/send_event.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/http/streams.py b/synapse/replication/http/streams.py
index 309159e304..9afa147d00 100644
--- a/synapse/replication/http/streams.py
+++ b/synapse/replication/http/streams.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/__init__.py b/synapse/replication/slave/__init__.py
index b7df13c9ee..f43a360a80 100644
--- a/synapse/replication/slave/__init__.py
+++ b/synapse/replication/slave/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/__init__.py b/synapse/replication/slave/storage/__init__.py
index b7df13c9ee..f43a360a80 100644
--- a/synapse/replication/slave/storage/__init__.py
+++ b/synapse/replication/slave/storage/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/_base.py b/synapse/replication/slave/storage/_base.py
index 693c9ab901..faa99387a7 100644
--- a/synapse/replication/slave/storage/_base.py
+++ b/synapse/replication/slave/storage/_base.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/_slaved_id_tracker.py b/synapse/replication/slave/storage/_slaved_id_tracker.py
index 0d39a93ed2..2cb7489047 100644
--- a/synapse/replication/slave/storage/_slaved_id_tracker.py
+++ b/synapse/replication/slave/storage/_slaved_id_tracker.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/account_data.py b/synapse/replication/slave/storage/account_data.py
index 21afe5f155..ee74ee7d85 100644
--- a/synapse/replication/slave/storage/account_data.py
+++ b/synapse/replication/slave/storage/account_data.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/replication/slave/storage/appservice.py b/synapse/replication/slave/storage/appservice.py
index 0f8d7037bd..29f50c0add 100644
--- a/synapse/replication/slave/storage/appservice.py
+++ b/synapse/replication/slave/storage/appservice.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/replication/slave/storage/client_ips.py b/synapse/replication/slave/storage/client_ips.py
index 0f5b7adef7..8730966380 100644
--- a/synapse/replication/slave/storage/client_ips.py
+++ b/synapse/replication/slave/storage/client_ips.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/deviceinbox.py b/synapse/replication/slave/storage/deviceinbox.py
index 1260f6d141..e940751084 100644
--- a/synapse/replication/slave/storage/deviceinbox.py
+++ b/synapse/replication/slave/storage/deviceinbox.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/devices.py b/synapse/replication/slave/storage/devices.py
index e0d86240dd..70207420a6 100644
--- a/synapse/replication/slave/storage/devices.py
+++ b/synapse/replication/slave/storage/devices.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/directory.py b/synapse/replication/slave/storage/directory.py
index 1945bcf9a8..71fde0c96c 100644
--- a/synapse/replication/slave/storage/directory.py
+++ b/synapse/replication/slave/storage/directory.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/events.py b/synapse/replication/slave/storage/events.py
index fbffe6d85c..d4d3f8c448 100644
--- a/synapse/replication/slave/storage/events.py
+++ b/synapse/replication/slave/storage/events.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/replication/slave/storage/filtering.py b/synapse/replication/slave/storage/filtering.py
index 6a23252861..37875bc973 100644
--- a/synapse/replication/slave/storage/filtering.py
+++ b/synapse/replication/slave/storage/filtering.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/groups.py b/synapse/replication/slave/storage/groups.py
index 30955bcbfe..e9bdc38470 100644
--- a/synapse/replication/slave/storage/groups.py
+++ b/synapse/replication/slave/storage/groups.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/keys.py b/synapse/replication/slave/storage/keys.py
index 961579751c..a00b38c512 100644
--- a/synapse/replication/slave/storage/keys.py
+++ b/synapse/replication/slave/storage/keys.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/presence.py b/synapse/replication/slave/storage/presence.py
index 55620c03d8..57327d910d 100644
--- a/synapse/replication/slave/storage/presence.py
+++ b/synapse/replication/slave/storage/presence.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/profile.py b/synapse/replication/slave/storage/profile.py
index f85b20a071..99f4a22642 100644
--- a/synapse/replication/slave/storage/profile.py
+++ b/synapse/replication/slave/storage/profile.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/push_rule.py b/synapse/replication/slave/storage/push_rule.py
index de904c943c..4d5f862862 100644
--- a/synapse/replication/slave/storage/push_rule.py
+++ b/synapse/replication/slave/storage/push_rule.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/replication/slave/storage/pushers.py b/synapse/replication/slave/storage/pushers.py
index 93161c3dfb..2672a2c94b 100644
--- a/synapse/replication/slave/storage/pushers.py
+++ b/synapse/replication/slave/storage/pushers.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/replication/slave/storage/receipts.py b/synapse/replication/slave/storage/receipts.py
index 3dfdd9961d..3826b87dec 100644
--- a/synapse/replication/slave/storage/receipts.py
+++ b/synapse/replication/slave/storage/receipts.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/replication/slave/storage/registration.py b/synapse/replication/slave/storage/registration.py
index a40f064e2b..5dae35a960 100644
--- a/synapse/replication/slave/storage/registration.py
+++ b/synapse/replication/slave/storage/registration.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/room.py b/synapse/replication/slave/storage/room.py
index 109ac6bea1..8cc6de3f46 100644
--- a/synapse/replication/slave/storage/room.py
+++ b/synapse/replication/slave/storage/room.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/slave/storage/transactions.py b/synapse/replication/slave/storage/transactions.py
index 2091ac0df6..a59e543924 100644
--- a/synapse/replication/slave/storage/transactions.py
+++ b/synapse/replication/slave/storage/transactions.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/__init__.py b/synapse/replication/tcp/__init__.py
index 1b8718b11d..1fa60af8e6 100644
--- a/synapse/replication/tcp/__init__.py
+++ b/synapse/replication/tcp/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/client.py b/synapse/replication/tcp/client.py
index 3455839d67..ced69ee904 100644
--- a/synapse/replication/tcp/client.py
+++ b/synapse/replication/tcp/client.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/commands.py b/synapse/replication/tcp/commands.py
index 8abed1f52d..505d450e19 100644
--- a/synapse/replication/tcp/commands.py
+++ b/synapse/replication/tcp/commands.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/external_cache.py b/synapse/replication/tcp/external_cache.py
index d89a36f25a..1a3b051e3c 100644
--- a/synapse/replication/tcp/external_cache.py
+++ b/synapse/replication/tcp/external_cache.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/handler.py b/synapse/replication/tcp/handler.py
index a8894beadf..2ce1b9f222 100644
--- a/synapse/replication/tcp/handler.py
+++ b/synapse/replication/tcp/handler.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/replication/tcp/protocol.py b/synapse/replication/tcp/protocol.py
index d10d574246..6860576e78 100644
--- a/synapse/replication/tcp/protocol.py
+++ b/synapse/replication/tcp/protocol.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/redis.py b/synapse/replication/tcp/redis.py
index 98bdeb0ec6..6a2c2655e4 100644
--- a/synapse/replication/tcp/redis.py
+++ b/synapse/replication/tcp/redis.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/resource.py b/synapse/replication/tcp/resource.py
index 2018f9f29e..bd47d84258 100644
--- a/synapse/replication/tcp/resource.py
+++ b/synapse/replication/tcp/resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/replication/tcp/streams/__init__.py b/synapse/replication/tcp/streams/__init__.py
index d1a61c3314..fb74ac4e98 100644
--- a/synapse/replication/tcp/streams/__init__.py
+++ b/synapse/replication/tcp/streams/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2019 New Vector Ltd
 #
diff --git a/synapse/replication/tcp/streams/_base.py b/synapse/replication/tcp/streams/_base.py
index 3dfee76743..520c45f151 100644
--- a/synapse/replication/tcp/streams/_base.py
+++ b/synapse/replication/tcp/streams/_base.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2019 New Vector Ltd
 #
diff --git a/synapse/replication/tcp/streams/events.py b/synapse/replication/tcp/streams/events.py
index fa5e37ba7b..e7e87bac92 100644
--- a/synapse/replication/tcp/streams/events.py
+++ b/synapse/replication/tcp/streams/events.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2019 New Vector Ltd
 #
diff --git a/synapse/replication/tcp/streams/federation.py b/synapse/replication/tcp/streams/federation.py
index 9bb8e9e177..096a85d363 100644
--- a/synapse/replication/tcp/streams/federation.py
+++ b/synapse/replication/tcp/streams/federation.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2019 New Vector Ltd
 #
diff --git a/synapse/rest/__init__.py b/synapse/rest/__init__.py
index 40f5c32db2..79d52d2dcb 100644
--- a/synapse/rest/__init__.py
+++ b/synapse/rest/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/rest/admin/__init__.py b/synapse/rest/admin/__init__.py
index 2dec818a5f..9cb9a9f6aa 100644
--- a/synapse/rest/admin/__init__.py
+++ b/synapse/rest/admin/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018-2019 New Vector Ltd
 # Copyright 2020, 2021 The Matrix.org Foundation C.I.C.
diff --git a/synapse/rest/admin/_base.py b/synapse/rest/admin/_base.py
index 7681e55b58..f203f6fdc6 100644
--- a/synapse/rest/admin/_base.py
+++ b/synapse/rest/admin/_base.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/devices.py b/synapse/rest/admin/devices.py
index 5996de11c3..5715190a78 100644
--- a/synapse/rest/admin/devices.py
+++ b/synapse/rest/admin/devices.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 Dirk Klimpel
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/event_reports.py b/synapse/rest/admin/event_reports.py
index 381c3fe685..bbfcaf723b 100644
--- a/synapse/rest/admin/event_reports.py
+++ b/synapse/rest/admin/event_reports.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 Dirk Klimpel
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/groups.py b/synapse/rest/admin/groups.py
index ebc587aa06..3b3ffde0b6 100644
--- a/synapse/rest/admin/groups.py
+++ b/synapse/rest/admin/groups.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/media.py b/synapse/rest/admin/media.py
index 40646ef241..24dd46113a 100644
--- a/synapse/rest/admin/media.py
+++ b/synapse/rest/admin/media.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018-2019 New Vector Ltd
 #
diff --git a/synapse/rest/admin/purge_room_servlet.py b/synapse/rest/admin/purge_room_servlet.py
index 49966ee3e0..2365ff7a0f 100644
--- a/synapse/rest/admin/purge_room_servlet.py
+++ b/synapse/rest/admin/purge_room_servlet.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/rooms.py b/synapse/rest/admin/rooms.py
index cfe1bebb91..d0cf121743 100644
--- a/synapse/rest/admin/rooms.py
+++ b/synapse/rest/admin/rooms.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/server_notice_servlet.py b/synapse/rest/admin/server_notice_servlet.py
index f495666f4a..cc3ab5854b 100644
--- a/synapse/rest/admin/server_notice_servlet.py
+++ b/synapse/rest/admin/server_notice_servlet.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/statistics.py b/synapse/rest/admin/statistics.py
index f2490e382d..948de94ccd 100644
--- a/synapse/rest/admin/statistics.py
+++ b/synapse/rest/admin/statistics.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 Dirk Klimpel
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/admin/users.py b/synapse/rest/admin/users.py
index 04990c71fb..edda7861fa 100644
--- a/synapse/rest/admin/users.py
+++ b/synapse/rest/admin/users.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/__init__.py b/synapse/rest/client/__init__.py
index fe0ac3f8e9..629e2df74a 100644
--- a/synapse/rest/client/__init__.py
+++ b/synapse/rest/client/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/transactions.py b/synapse/rest/client/transactions.py
index 7be5c0fb88..94ff3719ce 100644
--- a/synapse/rest/client/transactions.py
+++ b/synapse/rest/client/transactions.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/__init__.py b/synapse/rest/client/v1/__init__.py
index bfebb0f644..5e83dba2ed 100644
--- a/synapse/rest/client/v1/__init__.py
+++ b/synapse/rest/client/v1/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/directory.py b/synapse/rest/client/v1/directory.py
index e5af26b176..ae92a3df8e 100644
--- a/synapse/rest/client/v1/directory.py
+++ b/synapse/rest/client/v1/directory.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/events.py b/synapse/rest/client/v1/events.py
index 6de4078290..ee7454996e 100644
--- a/synapse/rest/client/v1/events.py
+++ b/synapse/rest/client/v1/events.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/initial_sync.py b/synapse/rest/client/v1/initial_sync.py
index 91da0ee573..bef1edc838 100644
--- a/synapse/rest/client/v1/initial_sync.py
+++ b/synapse/rest/client/v1/initial_sync.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/login.py b/synapse/rest/client/v1/login.py
index 3151e72d4f..42e709ec14 100644
--- a/synapse/rest/client/v1/login.py
+++ b/synapse/rest/client/v1/login.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/logout.py b/synapse/rest/client/v1/logout.py
index ad8cea49c6..5aa7908d73 100644
--- a/synapse/rest/client/v1/logout.py
+++ b/synapse/rest/client/v1/logout.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/presence.py b/synapse/rest/client/v1/presence.py
index 23a529f8e3..c232484f29 100644
--- a/synapse/rest/client/v1/presence.py
+++ b/synapse/rest/client/v1/presence.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/profile.py b/synapse/rest/client/v1/profile.py
index 717c5f2b10..f42f4b3567 100644
--- a/synapse/rest/client/v1/profile.py
+++ b/synapse/rest/client/v1/profile.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/push_rule.py b/synapse/rest/client/v1/push_rule.py
index 241e535917..be29a0b39e 100644
--- a/synapse/rest/client/v1/push_rule.py
+++ b/synapse/rest/client/v1/push_rule.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/pusher.py b/synapse/rest/client/v1/pusher.py
index 0c148a213d..18102eca6c 100644
--- a/synapse/rest/client/v1/pusher.py
+++ b/synapse/rest/client/v1/pusher.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v1/room.py b/synapse/rest/client/v1/room.py
index 525efdf221..5cab4d3c7b 100644
--- a/synapse/rest/client/v1/room.py
+++ b/synapse/rest/client/v1/room.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/rest/client/v1/voip.py b/synapse/rest/client/v1/voip.py
index d07ca2c47c..c780ffded5 100644
--- a/synapse/rest/client/v1/voip.py
+++ b/synapse/rest/client/v1/voip.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/__init__.py b/synapse/rest/client/v2_alpha/__init__.py
index bfebb0f644..5e83dba2ed 100644
--- a/synapse/rest/client/v2_alpha/__init__.py
+++ b/synapse/rest/client/v2_alpha/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/_base.py b/synapse/rest/client/v2_alpha/_base.py
index f016b4f1bd..0443f4571c 100644
--- a/synapse/rest/client/v2_alpha/_base.py
+++ b/synapse/rest/client/v2_alpha/_base.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/account.py b/synapse/rest/client/v2_alpha/account.py
index 411fb57c47..3aad15132d 100644
--- a/synapse/rest/client/v2_alpha/account.py
+++ b/synapse/rest/client/v2_alpha/account.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2018 New Vector Ltd
diff --git a/synapse/rest/client/v2_alpha/account_data.py b/synapse/rest/client/v2_alpha/account_data.py
index 3f28c0bc3e..7517e9304e 100644
--- a/synapse/rest/client/v2_alpha/account_data.py
+++ b/synapse/rest/client/v2_alpha/account_data.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/account_validity.py b/synapse/rest/client/v2_alpha/account_validity.py
index bd7f9ae203..0ad07fb895 100644
--- a/synapse/rest/client/v2_alpha/account_validity.py
+++ b/synapse/rest/client/v2_alpha/account_validity.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/auth.py b/synapse/rest/client/v2_alpha/auth.py
index 75ece1c911..6ea1b50a62 100644
--- a/synapse/rest/client/v2_alpha/auth.py
+++ b/synapse/rest/client/v2_alpha/auth.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/capabilities.py b/synapse/rest/client/v2_alpha/capabilities.py
index 44ccf10ed4..6a24021484 100644
--- a/synapse/rest/client/v2_alpha/capabilities.py
+++ b/synapse/rest/client/v2_alpha/capabilities.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/devices.py b/synapse/rest/client/v2_alpha/devices.py
index 3d07aadd39..9af05f9b11 100644
--- a/synapse/rest/client/v2_alpha/devices.py
+++ b/synapse/rest/client/v2_alpha/devices.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/client/v2_alpha/filter.py b/synapse/rest/client/v2_alpha/filter.py
index 7cc692643b..411667a9c8 100644
--- a/synapse/rest/client/v2_alpha/filter.py
+++ b/synapse/rest/client/v2_alpha/filter.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/groups.py b/synapse/rest/client/v2_alpha/groups.py
index 08fb6b2b06..6285680c00 100644
--- a/synapse/rest/client/v2_alpha/groups.py
+++ b/synapse/rest/client/v2_alpha/groups.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2018 New Vector Ltd
 #
diff --git a/synapse/rest/client/v2_alpha/keys.py b/synapse/rest/client/v2_alpha/keys.py
index f092e5b3a2..a57ccbb5e5 100644
--- a/synapse/rest/client/v2_alpha/keys.py
+++ b/synapse/rest/client/v2_alpha/keys.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2019 New Vector Ltd
 # Copyright 2020 The Matrix.org Foundation C.I.C.
diff --git a/synapse/rest/client/v2_alpha/notifications.py b/synapse/rest/client/v2_alpha/notifications.py
index 87063ec8b1..0ede643c2d 100644
--- a/synapse/rest/client/v2_alpha/notifications.py
+++ b/synapse/rest/client/v2_alpha/notifications.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/openid.py b/synapse/rest/client/v2_alpha/openid.py
index 5b996e2d63..d3322acc38 100644
--- a/synapse/rest/client/v2_alpha/openid.py
+++ b/synapse/rest/client/v2_alpha/openid.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/password_policy.py b/synapse/rest/client/v2_alpha/password_policy.py
index 68b27ff23a..a83927aee6 100644
--- a/synapse/rest/client/v2_alpha/password_policy.py
+++ b/synapse/rest/client/v2_alpha/password_policy.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/read_marker.py b/synapse/rest/client/v2_alpha/read_marker.py
index 55c6688f52..5988fa47e5 100644
--- a/synapse/rest/client/v2_alpha/read_marker.py
+++ b/synapse/rest/client/v2_alpha/read_marker.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/receipts.py b/synapse/rest/client/v2_alpha/receipts.py
index 6f7246a394..8cf4aebdbe 100644
--- a/synapse/rest/client/v2_alpha/receipts.py
+++ b/synapse/rest/client/v2_alpha/receipts.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/register.py b/synapse/rest/client/v2_alpha/register.py
index 4a064849c1..b26aad7b34 100644
--- a/synapse/rest/client/v2_alpha/register.py
+++ b/synapse/rest/client/v2_alpha/register.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015 - 2016 OpenMarket Ltd
 # Copyright 2017 Vector Creations Ltd
 #
diff --git a/synapse/rest/client/v2_alpha/relations.py b/synapse/rest/client/v2_alpha/relations.py
index fe765da23c..c7da6759db 100644
--- a/synapse/rest/client/v2_alpha/relations.py
+++ b/synapse/rest/client/v2_alpha/relations.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/report_event.py b/synapse/rest/client/v2_alpha/report_event.py
index 215d619ca1..2c169abbf3 100644
--- a/synapse/rest/client/v2_alpha/report_event.py
+++ b/synapse/rest/client/v2_alpha/report_event.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/room_keys.py b/synapse/rest/client/v2_alpha/room_keys.py
index 53de97923f..263596be86 100644
--- a/synapse/rest/client/v2_alpha/room_keys.py
+++ b/synapse/rest/client/v2_alpha/room_keys.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017, 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py b/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py
index 147920767f..6d1b083acb 100644
--- a/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py
+++ b/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/sendtodevice.py b/synapse/rest/client/v2_alpha/sendtodevice.py
index 79c1b526ee..f8dcee603c 100644
--- a/synapse/rest/client/v2_alpha/sendtodevice.py
+++ b/synapse/rest/client/v2_alpha/sendtodevice.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/shared_rooms.py b/synapse/rest/client/v2_alpha/shared_rooms.py
index c866d5151c..d2e7f04b40 100644
--- a/synapse/rest/client/v2_alpha/shared_rooms.py
+++ b/synapse/rest/client/v2_alpha/shared_rooms.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 Half-Shot
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/sync.py b/synapse/rest/client/v2_alpha/sync.py
index 3481770c83..95ee3f1b84 100644
--- a/synapse/rest/client/v2_alpha/sync.py
+++ b/synapse/rest/client/v2_alpha/sync.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/tags.py b/synapse/rest/client/v2_alpha/tags.py
index a97cd66c52..c14f83be18 100644
--- a/synapse/rest/client/v2_alpha/tags.py
+++ b/synapse/rest/client/v2_alpha/tags.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/thirdparty.py b/synapse/rest/client/v2_alpha/thirdparty.py
index 0c127a1b5f..b5c67c9bb6 100644
--- a/synapse/rest/client/v2_alpha/thirdparty.py
+++ b/synapse/rest/client/v2_alpha/thirdparty.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/tokenrefresh.py b/synapse/rest/client/v2_alpha/tokenrefresh.py
index 79317c74ba..b2f858545c 100644
--- a/synapse/rest/client/v2_alpha/tokenrefresh.py
+++ b/synapse/rest/client/v2_alpha/tokenrefresh.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/v2_alpha/user_directory.py b/synapse/rest/client/v2_alpha/user_directory.py
index ad598cefe0..7e8912f0b9 100644
--- a/synapse/rest/client/v2_alpha/user_directory.py
+++ b/synapse/rest/client/v2_alpha/user_directory.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/client/versions.py b/synapse/rest/client/versions.py
index 3e3d8839f4..4582c274c7 100644
--- a/synapse/rest/client/versions.py
+++ b/synapse/rest/client/versions.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2018-2019 New Vector Ltd
diff --git a/synapse/rest/consent/consent_resource.py b/synapse/rest/consent/consent_resource.py
index 8b9ef26cf2..c4550d3cf0 100644
--- a/synapse/rest/consent/consent_resource.py
+++ b/synapse/rest/consent/consent_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/health.py b/synapse/rest/health.py
index 0170950bf3..4487b54abf 100644
--- a/synapse/rest/health.py
+++ b/synapse/rest/health.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/key/__init__.py b/synapse/rest/key/__init__.py
index fe0ac3f8e9..629e2df74a 100644
--- a/synapse/rest/key/__init__.py
+++ b/synapse/rest/key/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/key/v2/__init__.py b/synapse/rest/key/v2/__init__.py
index cb5abcf826..c6c63073ea 100644
--- a/synapse/rest/key/v2/__init__.py
+++ b/synapse/rest/key/v2/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/key/v2/local_key_resource.py b/synapse/rest/key/v2/local_key_resource.py
index d8e8e48c1c..e8dbe240d8 100644
--- a/synapse/rest/key/v2/local_key_resource.py
+++ b/synapse/rest/key/v2/local_key_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/media/v1/__init__.py b/synapse/rest/media/v1/__init__.py
index 3b8c96e267..d20186bbd0 100644
--- a/synapse/rest/media/v1/__init__.py
+++ b/synapse/rest/media/v1/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/media/v1/_base.py b/synapse/rest/media/v1/_base.py
index 6366947071..0fb4cd81f1 100644
--- a/synapse/rest/media/v1/_base.py
+++ b/synapse/rest/media/v1/_base.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2019-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/config_resource.py b/synapse/rest/media/v1/config_resource.py
index c41a7ab412..b20c29f007 100644
--- a/synapse/rest/media/v1/config_resource.py
+++ b/synapse/rest/media/v1/config_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018 Will Hunt
 # Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/download_resource.py b/synapse/rest/media/v1/download_resource.py
index 5dadaeaf57..cd2468f9c5 100644
--- a/synapse/rest/media/v1/download_resource.py
+++ b/synapse/rest/media/v1/download_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/filepath.py b/synapse/rest/media/v1/filepath.py
index 7792f26e78..4088e7a059 100644
--- a/synapse/rest/media/v1/filepath.py
+++ b/synapse/rest/media/v1/filepath.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/media_repository.py b/synapse/rest/media/v1/media_repository.py
index 0c041b542d..87e3645ddc 100644
--- a/synapse/rest/media/v1/media_repository.py
+++ b/synapse/rest/media/v1/media_repository.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2018-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/media_storage.py b/synapse/rest/media/v1/media_storage.py
index b1b1c9e6ec..c7fd97c46c 100644
--- a/synapse/rest/media/v1/media_storage.py
+++ b/synapse/rest/media/v1/media_storage.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/media/v1/preview_url_resource.py b/synapse/rest/media/v1/preview_url_resource.py
index 814145a04a..0adfb1a70f 100644
--- a/synapse/rest/media/v1/preview_url_resource.py
+++ b/synapse/rest/media/v1/preview_url_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 # Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/storage_provider.py b/synapse/rest/media/v1/storage_provider.py
index 031947557d..0ff6ad3c0c 100644
--- a/synapse/rest/media/v1/storage_provider.py
+++ b/synapse/rest/media/v1/storage_provider.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2018-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/media/v1/thumbnail_resource.py b/synapse/rest/media/v1/thumbnail_resource.py
index af802bc0b1..a029d426f0 100644
--- a/synapse/rest/media/v1/thumbnail_resource.py
+++ b/synapse/rest/media/v1/thumbnail_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/thumbnailer.py b/synapse/rest/media/v1/thumbnailer.py
index 988f52c78f..37fe582390 100644
--- a/synapse/rest/media/v1/thumbnailer.py
+++ b/synapse/rest/media/v1/thumbnailer.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/media/v1/upload_resource.py b/synapse/rest/media/v1/upload_resource.py
index 0138b2e2d1..80f017a4dd 100644
--- a/synapse/rest/media/v1/upload_resource.py
+++ b/synapse/rest/media/v1/upload_resource.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
diff --git a/synapse/rest/synapse/__init__.py b/synapse/rest/synapse/__init__.py
index c0b733488b..6ef4fbe8f7 100644
--- a/synapse/rest/synapse/__init__.py
+++ b/synapse/rest/synapse/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/synapse/client/__init__.py b/synapse/rest/synapse/client/__init__.py
index 9eeb970580..47a2f72b32 100644
--- a/synapse/rest/synapse/client/__init__.py
+++ b/synapse/rest/synapse/client/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/synapse/rest/synapse/client/new_user_consent.py b/synapse/rest/synapse/client/new_user_consent.py
index 78ee0b5e88..e5634f9679 100644
--- a/synapse/rest/synapse/client/new_user_consent.py
+++ b/synapse/rest/synapse/client/new_user_consent.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Copyright 2021 The Matrix.org Foundation C.I.C.
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/oidc/__init__.py b/synapse/rest/synapse/client/oidc/__init__.py index 64c0deb75d..36ba401656 100644 --- a/synapse/rest/synapse/client/oidc/__init__.py +++ b/synapse/rest/synapse/client/oidc/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Quentin Gliech # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/oidc/callback_resource.py b/synapse/rest/synapse/client/oidc/callback_resource.py index 1af33f0a45..7785f17e90 100644 --- a/synapse/rest/synapse/client/oidc/callback_resource.py +++ b/synapse/rest/synapse/client/oidc/callback_resource.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Quentin Gliech # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/password_reset.py b/synapse/rest/synapse/client/password_reset.py index d26ce46efc..f2800bf2db 100644 --- a/synapse/rest/synapse/client/password_reset.py +++ b/synapse/rest/synapse/client/password_reset.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/pick_idp.py b/synapse/rest/synapse/client/pick_idp.py index 9550b82998..d3a94a9349 100644 --- a/synapse/rest/synapse/client/pick_idp.py +++ b/synapse/rest/synapse/client/pick_idp.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/pick_username.py b/synapse/rest/synapse/client/pick_username.py index d9ffe84489..9b002cc15e 100644 --- a/synapse/rest/synapse/client/pick_username.py +++ b/synapse/rest/synapse/client/pick_username.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/saml2/__init__.py b/synapse/rest/synapse/client/saml2/__init__.py index 3e8235ee1e..781ccb237c 100644 --- a/synapse/rest/synapse/client/saml2/__init__.py +++ b/synapse/rest/synapse/client/saml2/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/saml2/metadata_resource.py b/synapse/rest/synapse/client/saml2/metadata_resource.py index 1e8526e22e..b37c7083dc 100644 --- a/synapse/rest/synapse/client/saml2/metadata_resource.py +++ b/synapse/rest/synapse/client/saml2/metadata_resource.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/synapse/client/saml2/response_resource.py b/synapse/rest/synapse/client/saml2/response_resource.py index 4dfadf1bfb..774ccd870f 100644 --- a/synapse/rest/synapse/client/saml2/response_resource.py +++ b/synapse/rest/synapse/client/saml2/response_resource.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # Copyright 2018 New Vector Ltd # diff --git a/synapse/rest/synapse/client/sso_register.py b/synapse/rest/synapse/client/sso_register.py index f2acce2437..70cd148a76 100644 --- a/synapse/rest/synapse/client/sso_register.py +++ b/synapse/rest/synapse/client/sso_register.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/rest/well_known.py b/synapse/rest/well_known.py index f591cc6c5c..19ac3af337 100644 --- a/synapse/rest/well_known.py +++ b/synapse/rest/well_known.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/secrets.py b/synapse/secrets.py index 7939db75e7..bf829251fd 100644 --- a/synapse/secrets.py +++ b/synapse/secrets.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/server.py b/synapse/server.py index cfb55c230d..6c35ae6e50 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. 
diff --git a/synapse/server_notices/consent_server_notices.py b/synapse/server_notices/consent_server_notices.py index a9349bf9a1..e65f6f88fe 100644 --- a/synapse/server_notices/consent_server_notices.py +++ b/synapse/server_notices/consent_server_notices.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/server_notices/resource_limits_server_notices.py b/synapse/server_notices/resource_limits_server_notices.py index a18a2e76c9..e4b0bc5c72 100644 --- a/synapse/server_notices/resource_limits_server_notices.py +++ b/synapse/server_notices/resource_limits_server_notices.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/server_notices/server_notices_manager.py b/synapse/server_notices/server_notices_manager.py index 144e1da78e..f19075b760 100644 --- a/synapse/server_notices/server_notices_manager.py +++ b/synapse/server_notices/server_notices_manager.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/server_notices/server_notices_sender.py b/synapse/server_notices/server_notices_sender.py index 965c645889..c875b15b32 100644 --- a/synapse/server_notices/server_notices_sender.py +++ b/synapse/server_notices/server_notices_sender.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/server_notices/worker_server_notices_sender.py b/synapse/server_notices/worker_server_notices_sender.py index c76bd57460..cc53318491 100644 --- a/synapse/server_notices/worker_server_notices_sender.py +++ b/synapse/server_notices/worker_server_notices_sender.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/spam_checker_api/__init__.py b/synapse/spam_checker_api/__init__.py index 3ce25bb012..73018f2d00 100644 --- a/synapse/spam_checker_api/__init__.py +++ b/synapse/spam_checker_api/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/state/__init__.py b/synapse/state/__init__.py index c0f79ffdc8..c7ee731154 100644 --- a/synapse/state/__init__.py +++ b/synapse/state/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/state/v1.py b/synapse/state/v1.py index ce255da6fd..318e998813 100644 --- a/synapse/state/v1.py +++ b/synapse/state/v1.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/state/v2.py b/synapse/state/v2.py index e73a548ee4..32671ddbde 100644 --- a/synapse/state/v2.py +++ b/synapse/state/v2.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/__init__.py b/synapse/storage/__init__.py index 0b9007e51f..105e4e1fec 100644 --- a/synapse/storage/__init__.py +++ b/synapse/storage/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018,2019 New Vector Ltd # diff --git a/synapse/storage/_base.py b/synapse/storage/_base.py index 240905329f..56dd3a4861 100644 --- a/synapse/storage/_base.py +++ b/synapse/storage/_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/storage/background_updates.py b/synapse/storage/background_updates.py index ccb06aab39..142787fdfd 100644 --- a/synapse/storage/background_updates.py +++ b/synapse/storage/background_updates.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/database.py b/synapse/storage/database.py index 77ef29ec71..9a6d2b21f9 100644 --- a/synapse/storage/database.py +++ b/synapse/storage/database.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/storage/databases/__init__.py b/synapse/storage/databases/__init__.py index 379c78bb83..20b755056b 100644 --- a/synapse/storage/databases/__init__.py +++ b/synapse/storage/databases/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/__init__.py b/synapse/storage/databases/main/__init__.py index b3d16ca7ac..5c50f5f950 100644 --- a/synapse/storage/databases/main/__init__.py +++ b/synapse/storage/databases/main/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # Copyright 2019-2021 The Matrix.org Foundation C.I.C. 
diff --git a/synapse/storage/databases/main/account_data.py b/synapse/storage/databases/main/account_data.py index a277a1ef13..1d02795f43 100644 --- a/synapse/storage/databases/main/account_data.py +++ b/synapse/storage/databases/main/account_data.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/appservice.py b/synapse/storage/databases/main/appservice.py index 85bb853d33..9f182c2a89 100644 --- a/synapse/storage/databases/main/appservice.py +++ b/synapse/storage/databases/main/appservice.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/cache.py b/synapse/storage/databases/main/cache.py index 1e7637a6f5..ecc1f935e2 100644 --- a/synapse/storage/databases/main/cache.py +++ b/synapse/storage/databases/main/cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/censor_events.py b/synapse/storage/databases/main/censor_events.py index 3e26d5ba87..f22c1f241b 100644 --- a/synapse/storage/databases/main/censor_events.py +++ b/synapse/storage/databases/main/censor_events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/client_ips.py b/synapse/storage/databases/main/client_ips.py index ea3c15fd0e..d60010e942 100644 --- a/synapse/storage/databases/main/client_ips.py +++ b/synapse/storage/databases/main/client_ips.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/deviceinbox.py b/synapse/storage/databases/main/deviceinbox.py index 691080ce74..7c9d1f744e 100644 --- a/synapse/storage/databases/main/deviceinbox.py +++ b/synapse/storage/databases/main/deviceinbox.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/devices.py b/synapse/storage/databases/main/devices.py index 9bf8ba888f..b204875580 100644 --- a/synapse/storage/databases/main/devices.py +++ b/synapse/storage/databases/main/devices.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # Copyright 2019,2020 The Matrix.org Foundation C.I.C. diff --git a/synapse/storage/databases/main/directory.py b/synapse/storage/databases/main/directory.py index 267b948397..86075bc55b 100644 --- a/synapse/storage/databases/main/directory.py +++ b/synapse/storage/databases/main/directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/e2e_room_keys.py b/synapse/storage/databases/main/e2e_room_keys.py index 12cecceec2..b15fb71e62 100644 --- a/synapse/storage/databases/main/e2e_room_keys.py +++ b/synapse/storage/databases/main/e2e_room_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # Copyright 2019 Matrix.org Foundation C.I.C. 
# diff --git a/synapse/storage/databases/main/end_to_end_keys.py b/synapse/storage/databases/main/end_to_end_keys.py index f1e7859d26..88afe97c41 100644 --- a/synapse/storage/databases/main/end_to_end_keys.py +++ b/synapse/storage/databases/main/end_to_end_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # Copyright 2019,2020 The Matrix.org Foundation C.I.C. diff --git a/synapse/storage/databases/main/event_federation.py b/synapse/storage/databases/main/event_federation.py index a956be491a..32ce70a396 100644 --- a/synapse/storage/databases/main/event_federation.py +++ b/synapse/storage/databases/main/event_federation.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/event_push_actions.py b/synapse/storage/databases/main/event_push_actions.py index 78245ad5bd..5845322118 100644 --- a/synapse/storage/databases/main/event_push_actions.py +++ b/synapse/storage/databases/main/event_push_actions.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/events.py b/synapse/storage/databases/main/events.py index ad17123915..bed4326d11 100644 --- a/synapse/storage/databases/main/events.py +++ b/synapse/storage/databases/main/events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018-2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/storage/databases/main/events_bg_updates.py b/synapse/storage/databases/main/events_bg_updates.py index 79e7df6ca9..cbe4be1437 100644 --- a/synapse/storage/databases/main/events_bg_updates.py +++ b/synapse/storage/databases/main/events_bg_updates.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/events_forward_extremities.py b/synapse/storage/databases/main/events_forward_extremities.py index b3703ae161..6d2688d711 100644 --- a/synapse/storage/databases/main/events_forward_extremities.py +++ b/synapse/storage/databases/main/events_forward_extremities.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/events_worker.py b/synapse/storage/databases/main/events_worker.py index c00780969f..64d70785b8 100644 --- a/synapse/storage/databases/main/events_worker.py +++ b/synapse/storage/databases/main/events_worker.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/filtering.py b/synapse/storage/databases/main/filtering.py index d2f5b9a502..bb244a03c0 100644 --- a/synapse/storage/databases/main/filtering.py +++ b/synapse/storage/databases/main/filtering.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/group_server.py b/synapse/storage/databases/main/group_server.py index bd7826f4e9..66ad363bfb 100644 --- a/synapse/storage/databases/main/group_server.py +++ b/synapse/storage/databases/main/group_server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/keys.py b/synapse/storage/databases/main/keys.py index d504323b03..0e86807834 100644 --- a/synapse/storage/databases/main/keys.py +++ b/synapse/storage/databases/main/keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd. # diff --git a/synapse/storage/databases/main/media_repository.py b/synapse/storage/databases/main/media_repository.py index b7820ac7ff..c584868188 100644 --- a/synapse/storage/databases/main/media_repository.py +++ b/synapse/storage/databases/main/media_repository.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2020-2021 The Matrix.org Foundation C.I.C. # diff --git a/synapse/storage/databases/main/metrics.py b/synapse/storage/databases/main/metrics.py index 614a418a15..c3f551d377 100644 --- a/synapse/storage/databases/main/metrics.py +++ b/synapse/storage/databases/main/metrics.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/monthly_active_users.py b/synapse/storage/databases/main/monthly_active_users.py index 757da3d55d..fe25638289 100644 --- a/synapse/storage/databases/main/monthly_active_users.py +++ b/synapse/storage/databases/main/monthly_active_users.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/presence.py b/synapse/storage/databases/main/presence.py index 0ff693a310..c207d917b1 100644 --- a/synapse/storage/databases/main/presence.py +++ b/synapse/storage/databases/main/presence.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/profile.py b/synapse/storage/databases/main/profile.py index ba01d3108a..9b4e95e134 100644 --- a/synapse/storage/databases/main/profile.py +++ b/synapse/storage/databases/main/profile.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/purge_events.py b/synapse/storage/databases/main/purge_events.py index 41f4fe7f95..8f83748b5e 100644 --- a/synapse/storage/databases/main/purge_events.py +++ b/synapse/storage/databases/main/purge_events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/push_rule.py b/synapse/storage/databases/main/push_rule.py index 9e58dc0e6a..db52176337 100644 --- a/synapse/storage/databases/main/push_rule.py +++ b/synapse/storage/databases/main/push_rule.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/pusher.py b/synapse/storage/databases/main/pusher.py index c65558c280..b48fe086d4 100644 --- a/synapse/storage/databases/main/pusher.py +++ b/synapse/storage/databases/main/pusher.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/receipts.py b/synapse/storage/databases/main/receipts.py index 43c852c96c..3647276acb 100644 --- a/synapse/storage/databases/main/receipts.py +++ b/synapse/storage/databases/main/receipts.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/registration.py b/synapse/storage/databases/main/registration.py index 90a8f664ef..833214b7e0 100644 --- a/synapse/storage/databases/main/registration.py +++ b/synapse/storage/databases/main/registration.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019,2020 The Matrix.org Foundation C.I.C. 
diff --git a/synapse/storage/databases/main/rejections.py b/synapse/storage/databases/main/rejections.py index 1e361aaa9a..167318b314 100644 --- a/synapse/storage/databases/main/rejections.py +++ b/synapse/storage/databases/main/rejections.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/relations.py b/synapse/storage/databases/main/relations.py index 5cd61547f7..2bbf6d6a95 100644 --- a/synapse/storage/databases/main/relations.py +++ b/synapse/storage/databases/main/relations.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/room.py b/synapse/storage/databases/main/room.py index 47fb12f3f6..5f38634f48 100644 --- a/synapse/storage/databases/main/room.py +++ b/synapse/storage/databases/main/room.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/storage/databases/main/roommember.py b/synapse/storage/databases/main/roommember.py index a9216ca9ae..ef5587f87a 100644 --- a/synapse/storage/databases/main/roommember.py +++ b/synapse/storage/databases/main/roommember.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/schema/delta/50/make_event_content_nullable.py b/synapse/storage/databases/main/schema/delta/50/make_event_content_nullable.py index b1684a8441..acd6ad1e1f 100644 --- a/synapse/storage/databases/main/schema/delta/50/make_event_content_nullable.py +++ b/synapse/storage/databases/main/schema/delta/50/make_event_content_nullable.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/schema/delta/57/local_current_membership.py b/synapse/storage/databases/main/schema/delta/57/local_current_membership.py index 44917f0a2e..66989222e6 100644 --- a/synapse/storage/databases/main/schema/delta/57/local_current_membership.py +++ b/synapse/storage/databases/main/schema/delta/57/local_current_membership.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/search.py b/synapse/storage/databases/main/search.py index f5e7d9ef98..0276f30656 100644 --- a/synapse/storage/databases/main/search.py +++ b/synapse/storage/databases/main/search.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/signatures.py b/synapse/storage/databases/main/signatures.py index c8c67953e4..ab2159c2d3 100644 --- a/synapse/storage/databases/main/signatures.py +++ b/synapse/storage/databases/main/signatures.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/state.py b/synapse/storage/databases/main/state.py index 93431efe00..1757064a68 100644 --- a/synapse/storage/databases/main/state.py +++ b/synapse/storage/databases/main/state.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket 
Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. # diff --git a/synapse/storage/databases/main/state_deltas.py b/synapse/storage/databases/main/state_deltas.py index 0dbb501f16..bff7d0404f 100644 --- a/synapse/storage/databases/main/state_deltas.py +++ b/synapse/storage/databases/main/state_deltas.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/stats.py b/synapse/storage/databases/main/stats.py index bce8946c21..ae9f880965 100644 --- a/synapse/storage/databases/main/stats.py +++ b/synapse/storage/databases/main/stats.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018, 2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/storage/databases/main/stream.py b/synapse/storage/databases/main/stream.py index 91f8abb67d..db5ce4ea01 100644 --- a/synapse/storage/databases/main/stream.py +++ b/synapse/storage/databases/main/stream.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2018-2019 New Vector Ltd diff --git a/synapse/storage/databases/main/tags.py b/synapse/storage/databases/main/tags.py index 50067eabfc..1d62c6140f 100644 --- a/synapse/storage/databases/main/tags.py +++ b/synapse/storage/databases/main/tags.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/databases/main/transactions.py b/synapse/storage/databases/main/transactions.py index b7072f1f5e..82335e7a9d 100644 --- a/synapse/storage/databases/main/transactions.py +++ b/synapse/storage/databases/main/transactions.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/ui_auth.py b/synapse/storage/databases/main/ui_auth.py index 5473ec1485..22c05cdde7 100644 --- a/synapse/storage/databases/main/ui_auth.py +++ b/synapse/storage/databases/main/ui_auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/user_directory.py b/synapse/storage/databases/main/user_directory.py index 1026f321e5..7a082fdd21 100644 --- a/synapse/storage/databases/main/user_directory.py +++ b/synapse/storage/databases/main/user_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/main/user_erasure_store.py b/synapse/storage/databases/main/user_erasure_store.py index f9575b1f1f..acf6b2fb64 100644 --- a/synapse/storage/databases/main/user_erasure_store.py +++ b/synapse/storage/databases/main/user_erasure_store.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/state/__init__.py b/synapse/storage/databases/state/__init__.py index c90d022899..e5100d6108 100644 --- a/synapse/storage/databases/state/__init__.py +++ b/synapse/storage/databases/state/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/state/bg_updates.py b/synapse/storage/databases/state/bg_updates.py index 75c09b3687..c2891cb07f 100644 --- a/synapse/storage/databases/state/bg_updates.py +++ b/synapse/storage/databases/state/bg_updates.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/databases/state/store.py b/synapse/storage/databases/state/store.py index dfcf89d91c..e38461adbc 100644 --- a/synapse/storage/databases/state/store.py +++ b/synapse/storage/databases/state/store.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/engines/__init__.py b/synapse/storage/engines/__init__.py index d15ccfacde..9abc02046e 100644 --- a/synapse/storage/engines/__init__.py +++ b/synapse/storage/engines/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/engines/_base.py b/synapse/storage/engines/_base.py index 21db1645d3..1882bfd9cf 100644 --- a/synapse/storage/engines/_base.py +++ b/synapse/storage/engines/_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/engines/postgres.py b/synapse/storage/engines/postgres.py index dba8cc51d3..21411c5fea 100644 --- a/synapse/storage/engines/postgres.py +++ b/synapse/storage/engines/postgres.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/engines/sqlite.py b/synapse/storage/engines/sqlite.py index f4f16456f2..5fe1b205e1 100644 --- a/synapse/storage/engines/sqlite.py +++ b/synapse/storage/engines/sqlite.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/keys.py b/synapse/storage/keys.py index c03871f393..540adb8781 100644 --- a/synapse/storage/keys.py +++ b/synapse/storage/keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd. # diff --git a/synapse/storage/persist_events.py b/synapse/storage/persist_events.py index 3a0d6fb32e..87e040b014 100644 --- a/synapse/storage/persist_events.py +++ b/synapse/storage/persist_events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018-2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/synapse/storage/prepare_database.py b/synapse/storage/prepare_database.py index c7f0b8ccb5..05a9355974 100644 --- a/synapse/storage/prepare_database.py +++ b/synapse/storage/prepare_database.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/purge_events.py b/synapse/storage/purge_events.py index ad954990a7..30669beb7c 100644 --- a/synapse/storage/purge_events.py +++ b/synapse/storage/purge_events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/push_rule.py b/synapse/storage/push_rule.py index f47cec0d86..2d5c21ef72 100644 --- a/synapse/storage/push_rule.py +++ b/synapse/storage/push_rule.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/relations.py b/synapse/storage/relations.py index 2564f34b47..c552dbf04c 100644 --- a/synapse/storage/relations.py +++ b/synapse/storage/relations.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/roommember.py b/synapse/storage/roommember.py index d2ff4da6b9..c34fbf21bc 100644 --- a/synapse/storage/roommember.py +++ b/synapse/storage/roommember.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/storage/state.py b/synapse/storage/state.py index c1c147c62a..cfafba22c5 100644 --- a/synapse/storage/state.py +++ b/synapse/storage/state.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/types.py b/synapse/storage/types.py index 17291c9d5e..57f4883bf4 100644 --- a/synapse/storage/types.py +++ b/synapse/storage/types.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/util/__init__.py b/synapse/storage/util/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/synapse/storage/util/__init__.py +++ b/synapse/storage/util/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/util/id_generators.py b/synapse/storage/util/id_generators.py index 32d6cc16b9..b1bd3a52d9 100644 --- a/synapse/storage/util/id_generators.py +++ b/synapse/storage/util/id_generators.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/storage/util/sequence.py b/synapse/storage/util/sequence.py index 36a67e7019..30b6b8e0ca 100644 --- a/synapse/storage/util/sequence.py +++ b/synapse/storage/util/sequence.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/streams/__init__.py b/synapse/streams/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/synapse/streams/__init__.py +++ b/synapse/streams/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/streams/config.py b/synapse/streams/config.py index fdda21d165..13d300588b 100644 --- a/synapse/streams/config.py +++ b/synapse/streams/config.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/streams/events.py b/synapse/streams/events.py index 92fd5d489f..20fceaa935 100644 --- a/synapse/streams/events.py +++ b/synapse/streams/events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/types.py b/synapse/types.py index b08ce90140..21654ae686 100644 --- a/synapse/types.py +++ b/synapse/types.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/synapse/util/__init__.py b/synapse/util/__init__.py index 517686f0a6..0f84fa3f4e 100644 --- a/synapse/util/__init__.py +++ b/synapse/util/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/async_helpers.py b/synapse/util/async_helpers.py index c3b2d981ea..5c55bb0125 100644 --- a/synapse/util/async_helpers.py +++ b/synapse/util/async_helpers.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/util/caches/__init__.py b/synapse/util/caches/__init__.py index 48f64eeb38..46af7fa473 100644 --- a/synapse/util/caches/__init__.py +++ b/synapse/util/caches/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2019, 2020 The Matrix.org Foundation C.I.C. # diff --git a/synapse/util/caches/cached_call.py b/synapse/util/caches/cached_call.py index 3ee0f2317a..a301c9e89b 100644 --- a/synapse/util/caches/cached_call.py +++ b/synapse/util/caches/cached_call.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/caches/deferred_cache.py b/synapse/util/caches/deferred_cache.py index dd392cf694..484097a48a 100644 --- a/synapse/util/caches/deferred_cache.py +++ b/synapse/util/caches/deferred_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. 
diff --git a/synapse/util/caches/descriptors.py b/synapse/util/caches/descriptors.py index 4e84379914..ac4a078b26 100644 --- a/synapse/util/caches/descriptors.py +++ b/synapse/util/caches/descriptors.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synapse/util/caches/dictionary_cache.py b/synapse/util/caches/dictionary_cache.py index b3b413b02c..56d94d96ce 100644 --- a/synapse/util/caches/dictionary_cache.py +++ b/synapse/util/caches/dictionary_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/caches/expiringcache.py b/synapse/util/caches/expiringcache.py index 4dc3477e89..ac47a31cd7 100644 --- a/synapse/util/caches/expiringcache.py +++ b/synapse/util/caches/expiringcache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/caches/lrucache.py b/synapse/util/caches/lrucache.py index 20c8e2d9f5..a21d34fcb4 100644 --- a/synapse/util/caches/lrucache.py +++ b/synapse/util/caches/lrucache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/caches/response_cache.py b/synapse/util/caches/response_cache.py index 46ea8e0964..2529845c9e 100644 --- a/synapse/util/caches/response_cache.py +++ b/synapse/util/caches/response_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/caches/stream_change_cache.py b/synapse/util/caches/stream_change_cache.py index 644e9e778a..0469e7d120 100644 --- a/synapse/util/caches/stream_change_cache.py +++ b/synapse/util/caches/stream_change_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/caches/ttlcache.py b/synapse/util/caches/ttlcache.py index 96a8274940..c276107d56 100644 --- a/synapse/util/caches/ttlcache.py +++ b/synapse/util/caches/ttlcache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/daemonize.py b/synapse/util/daemonize.py index 23393cf49b..31b24dd188 100644 --- a/synapse/util/daemonize.py +++ b/synapse/util/daemonize.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright (c) 2012, 2013, 2014 Ilya Otyutskiy # Copyright 2020 The Matrix.org Foundation C.I.C. 
# diff --git a/synapse/util/distributor.py b/synapse/util/distributor.py index 3c47285d05..1f803aef6d 100644 --- a/synapse/util/distributor.py +++ b/synapse/util/distributor.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/file_consumer.py b/synapse/util/file_consumer.py index 68dc632491..e946189f9a 100644 --- a/synapse/util/file_consumer.py +++ b/synapse/util/file_consumer.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/frozenutils.py b/synapse/util/frozenutils.py index 5ca2e71e60..2ac7c2913c 100644 --- a/synapse/util/frozenutils.py +++ b/synapse/util/frozenutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/hash.py b/synapse/util/hash.py index 359168704e..ba676e1762 100644 --- a/synapse/util/hash.py +++ b/synapse/util/hash.py @@ -1,5 +1,3 @@ -# -*- coding: utf-8 -*- - # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/iterutils.py b/synapse/util/iterutils.py index 98707c119d..6f73b1d56d 100644 --- a/synapse/util/iterutils.py +++ b/synapse/util/iterutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. # diff --git a/synapse/util/jsonobject.py b/synapse/util/jsonobject.py index e3a8ed5b2f..abc12f0837 100644 --- a/synapse/util/jsonobject.py +++ b/synapse/util/jsonobject.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/macaroons.py b/synapse/util/macaroons.py index 12cdd53327..f6ebfd7e7d 100644 --- a/synapse/util/macaroons.py +++ b/synapse/util/macaroons.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Quentin Gliech # Copyright 2021 The Matrix.org Foundation C.I.C. 
# diff --git a/synapse/util/metrics.py b/synapse/util/metrics.py index 1023c856d1..6ed7179e8c 100644 --- a/synapse/util/metrics.py +++ b/synapse/util/metrics.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/module_loader.py b/synapse/util/module_loader.py index d184e2a90c..8acbe276e4 100644 --- a/synapse/util/module_loader.py +++ b/synapse/util/module_loader.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/msisdn.py b/synapse/util/msisdn.py index c8bcbe297a..bbbdebf264 100644 --- a/synapse/util/msisdn.py +++ b/synapse/util/msisdn.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/patch_inline_callbacks.py b/synapse/util/patch_inline_callbacks.py index d9f9ae99d6..eed0291cae 100644 --- a/synapse/util/patch_inline_callbacks.py +++ b/synapse/util/patch_inline_callbacks.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/ratelimitutils.py b/synapse/util/ratelimitutils.py index 70d11e1ec3..a654c69684 100644 --- a/synapse/util/ratelimitutils.py +++ b/synapse/util/ratelimitutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/retryutils.py b/synapse/util/retryutils.py index 4ab379e429..f9c370a814 100644 --- a/synapse/util/retryutils.py +++ b/synapse/util/retryutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/rlimit.py b/synapse/util/rlimit.py index 207cd17c2a..bf812ab516 100644 --- a/synapse/util/rlimit.py +++ b/synapse/util/rlimit.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/stringutils.py b/synapse/util/stringutils.py index 9ce7873ab5..c0e6fb9a60 100644 --- a/synapse/util/stringutils.py +++ b/synapse/util/stringutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. # diff --git a/synapse/util/templates.py b/synapse/util/templates.py index 392dae4a40..38543dd1ea 100644 --- a/synapse/util/templates.py +++ b/synapse/util/templates.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/threepids.py b/synapse/util/threepids.py index 43c2e0ac23..281c5be4fb 100644 --- a/synapse/util/threepids.py +++ b/synapse/util/threepids.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/versionstring.py b/synapse/util/versionstring.py index ab7d03af3a..dfa30a6229 100644 --- a/synapse/util/versionstring.py +++ b/synapse/util/versionstring.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/util/wheel_timer.py b/synapse/util/wheel_timer.py index be3b22469d..61814aff24 100644 --- a/synapse/util/wheel_timer.py +++ b/synapse/util/wheel_timer.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synapse/visibility.py b/synapse/visibility.py index ff53a49b3a..490fb26e81 100644 --- a/synapse/visibility.py +++ b/synapse/visibility.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synctl b/synctl index 56c0e3940f..ccf404accb 100755 --- a/synctl +++ b/synctl @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/synmark/__init__.py b/synmark/__init__.py index 3d4ec3e184..2cc00b0f03 100644 --- a/synmark/__init__.py +++ b/synmark/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synmark/__main__.py b/synmark/__main__.py index f55968a5a4..35a59e347a 100644 --- a/synmark/__main__.py +++ b/synmark/__main__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synmark/suites/logging.py b/synmark/suites/logging.py index b3abc6b254..9419892e95 100644 --- a/synmark/suites/logging.py +++ b/synmark/suites/logging.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synmark/suites/lrucache.py b/synmark/suites/lrucache.py index 69ab042ccc..9b4a424149 100644 --- a/synmark/suites/lrucache.py +++ b/synmark/suites/lrucache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/synmark/suites/lrucache_evict.py b/synmark/suites/lrucache_evict.py index 532b1cc702..0ee202ed36 100644 --- a/synmark/suites/lrucache_evict.py +++ b/synmark/suites/lrucache_evict.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/__init__.py b/tests/__init__.py index ed805db1c2..5fced5cc4c 100644 --- a/tests/__init__.py +++ b/tests/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/tests/api/test_auth.py b/tests/api/test_auth.py index 28d77f0ca2..c0ed64f784 100644 --- a/tests/api/test_auth.py +++ b/tests/api/test_auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/api/test_filtering.py b/tests/api/test_filtering.py index ab7d290724..f44c91a373 100644 --- a/tests/api/test_filtering.py +++ b/tests/api/test_filtering.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2018-2019 New Vector Ltd diff --git a/tests/app/test_frontend_proxy.py b/tests/app/test_frontend_proxy.py index e0ca288829..3d45da38ab 100644 --- a/tests/app/test_frontend_proxy.py +++ b/tests/app/test_frontend_proxy.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/app/test_openid_listener.py b/tests/app/test_openid_listener.py index 33a37fe35e..276f09015e 100644 --- a/tests/app/test_openid_listener.py +++ b/tests/app/test_openid_listener.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/appservice/__init__.py b/tests/appservice/__init__.py index fe0ac3f8e9..629e2df74a 100644 --- a/tests/appservice/__init__.py +++ b/tests/appservice/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/appservice/test_appservice.py b/tests/appservice/test_appservice.py index 03a7440eec..f386b5e128 100644 --- a/tests/appservice/test_appservice.py +++ b/tests/appservice/test_appservice.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/appservice/test_scheduler.py b/tests/appservice/test_scheduler.py index 3c27d797fb..a2b5ed2030 100644 --- a/tests/appservice/test_scheduler.py +++ b/tests/appservice/test_scheduler.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/__init__.py b/tests/config/__init__.py index b7df13c9ee..f43a360a80 100644 --- a/tests/config/__init__.py +++ b/tests/config/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_base.py b/tests/config/test_base.py index 42ee5f56d9..84ae3b88ae 100644 --- a/tests/config/test_base.py +++ b/tests/config/test_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_cache.py b/tests/config/test_cache.py index 2b7f09c14b..857d9cd096 100644 --- a/tests/config/test_cache.py +++ b/tests/config/test_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_database.py b/tests/config/test_database.py index f675bde68e..9eca10bbe9 100644 --- a/tests/config/test_database.py +++ b/tests/config/test_database.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_generate.py b/tests/config/test_generate.py index 463855ecc8..fdfbb0e38e 100644 --- a/tests/config/test_generate.py +++ b/tests/config/test_generate.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_load.py b/tests/config/test_load.py index c109425671..ebe2c05165 100644 --- a/tests/config/test_load.py +++ b/tests/config/test_load.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_ratelimiting.py b/tests/config/test_ratelimiting.py index 13ab282384..3c7bb32e07 100644 --- a/tests/config/test_ratelimiting.py +++ b/tests/config/test_ratelimiting.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_room_directory.py b/tests/config/test_room_directory.py index 0ec10019b3..db745815ef 100644 --- a/tests/config/test_room_directory.py +++ b/tests/config/test_room_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_server.py b/tests/config/test_server.py index 98af7aa675..6f2b9e997d 100644 --- a/tests/config/test_server.py +++ b/tests/config/test_server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/config/test_tls.py b/tests/config/test_tls.py index ec32d4b1ca..183034f7d4 100644 --- a/tests/config/test_tls.py +++ b/tests/config/test_tls.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # Copyright 2019 Matrix.org Foundation C.I.C. # diff --git a/tests/config/test_util.py b/tests/config/test_util.py index 10363e3765..3d4929daac 100644 --- a/tests/config/test_util.py +++ b/tests/config/test_util.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/crypto/__init__.py b/tests/crypto/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/tests/crypto/__init__.py +++ b/tests/crypto/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/crypto/test_event_signing.py b/tests/crypto/test_event_signing.py index 62f639a18d..1c920157f5 100644 --- a/tests/crypto/test_event_signing.py +++ b/tests/crypto/test_event_signing.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/crypto/test_keyring.py b/tests/crypto/test_keyring.py index a56063315b..2775dfd880 100644 --- a/tests/crypto/test_keyring.py +++ b/tests/crypto/test_keyring.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/events/test_presence_router.py b/tests/events/test_presence_router.py index c996ecc221..01d257307c 100644 --- a/tests/events/test_presence_router.py +++ b/tests/events/test_presence_router.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the 'License'); diff --git a/tests/events/test_snapshot.py b/tests/events/test_snapshot.py index ec85324c0c..48e98aac79 100644 --- a/tests/events/test_snapshot.py +++ b/tests/events/test_snapshot.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/events/test_utils.py b/tests/events/test_utils.py index 8ba36c6074..9274ce4c39 100644 --- a/tests/events/test_utils.py +++ b/tests/events/test_utils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the 'License'); diff --git a/tests/federation/test_complexity.py b/tests/federation/test_complexity.py index 701fa8379f..1a809b2a6a 100644 --- a/tests/federation/test_complexity.py +++ b/tests/federation/test_complexity.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 Matrix.org Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/federation/test_federation_sender.py b/tests/federation/test_federation_sender.py index deb12433cf..b00dd143d6 100644 --- a/tests/federation/test_federation_sender.py +++ b/tests/federation/test_federation_sender.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/federation/test_federation_server.py b/tests/federation/test_federation_server.py index cfeccc0577..8508b6bd3b 100644 --- a/tests/federation/test_federation_server.py +++ b/tests/federation/test_federation_server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # Copyright 2019 Matrix.org Federation C.I.C # diff --git a/tests/federation/transport/test_server.py b/tests/federation/transport/test_server.py index 85500e169c..84fa72b9ff 100644 --- a/tests/federation/transport/test_server.py +++ b/tests/federation/transport/test_server.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_admin.py b/tests/handlers/test_admin.py index 32669ae9ce..18a734daf4 100644 --- a/tests/handlers/test_admin.py +++ b/tests/handlers/test_admin.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_appservice.py b/tests/handlers/test_appservice.py index 6e325b24ce..b037b12a0f 100644 --- a/tests/handlers/test_appservice.py +++ b/tests/handlers/test_appservice.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_auth.py b/tests/handlers/test_auth.py index 321c5ba045..fe7e9484fd 100644 --- a/tests/handlers/test_auth.py +++ b/tests/handlers/test_auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_device.py b/tests/handlers/test_device.py index 821629bc38..84c38b295d 100644 --- a/tests/handlers/test_device.py +++ b/tests/handlers/test_device.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # Copyright 2020 The Matrix.org Foundation C.I.C. diff --git a/tests/handlers/test_directory.py b/tests/handlers/test_directory.py index 6ae9d4f865..1908d3c2c6 100644 --- a/tests/handlers/test_directory.py +++ b/tests/handlers/test_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_e2e_keys.py b/tests/handlers/test_e2e_keys.py index 6915ac0205..61a00130b8 100644 --- a/tests/handlers/test_e2e_keys.py +++ b/tests/handlers/test_e2e_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/tests/handlers/test_e2e_room_keys.py b/tests/handlers/test_e2e_room_keys.py index 07893302ec..9b7e7a8e9a 100644 --- a/tests/handlers/test_e2e_room_keys.py +++ b/tests/handlers/test_e2e_room_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd # Copyright 2019 Matrix.org Foundation C.I.C. diff --git a/tests/handlers/test_federation.py b/tests/handlers/test_federation.py index 3af361195b..c7b0975a19 100644 --- a/tests/handlers/test_federation.py +++ b/tests/handlers/test_federation.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_message.py b/tests/handlers/test_message.py index a0d1ebdbe3..a8a9fc5b62 100644 --- a/tests/handlers/test_message.py +++ b/tests/handlers/test_message.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_oidc.py b/tests/handlers/test_oidc.py index 8702ee70e0..34d2fc1dfb 100644 --- a/tests/handlers/test_oidc.py +++ b/tests/handlers/test_oidc.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Quentin Gliech # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_password_providers.py b/tests/handlers/test_password_providers.py index e28e4159eb..32651db096 100644 --- a/tests/handlers/test_password_providers.py +++ b/tests/handlers/test_password_providers.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_presence.py b/tests/handlers/test_presence.py index 9f16cc65fc..2d12e82897 100644 --- a/tests/handlers/test_presence.py +++ b/tests/handlers/test_presence.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_profile.py b/tests/handlers/test_profile.py index d8b1bcac8b..5330a9b34e 100644 --- a/tests/handlers/test_profile.py +++ b/tests/handlers/test_profile.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_register.py b/tests/handlers/test_register.py index 69279a5ce9..608f8f3d33 100644 --- a/tests/handlers/test_register.py +++ b/tests/handlers/test_register.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_stats.py b/tests/handlers/test_stats.py index 312c0a0d41..c9d4fd9336 100644 --- a/tests/handlers/test_stats.py +++ b/tests/handlers/test_stats.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_sync.py b/tests/handlers/test_sync.py index 8e950f25c5..c8b43305f4 100644 --- a/tests/handlers/test_sync.py +++ b/tests/handlers/test_sync.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_typing.py b/tests/handlers/test_typing.py index 9fa231a37a..0c89487eaf 100644 --- a/tests/handlers/test_typing.py +++ b/tests/handlers/test_typing.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/handlers/test_user_directory.py b/tests/handlers/test_user_directory.py index c68cb830af..daac37abd8 100644 --- a/tests/handlers/test_user_directory.py +++ b/tests/handlers/test_user_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/__init__.py b/tests/http/__init__.py index 3e5a856584..e74f7f5b48 100644 --- a/tests/http/__init__.py +++ b/tests/http/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/federation/__init__.py b/tests/http/federation/__init__.py index 1453d04571..743fb9904a 100644 --- a/tests/http/federation/__init__.py +++ 
b/tests/http/federation/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/federation/test_matrix_federation_agent.py b/tests/http/federation/test_matrix_federation_agent.py index ae9d4504a8..e45980316b 100644 --- a/tests/http/federation/test_matrix_federation_agent.py +++ b/tests/http/federation/test_matrix_federation_agent.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/federation/test_srv_resolver.py b/tests/http/federation/test_srv_resolver.py index 466ce722d9..c49be33b9f 100644 --- a/tests/http/federation/test_srv_resolver.py +++ b/tests/http/federation/test_srv_resolver.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # diff --git a/tests/http/test_additional_resource.py b/tests/http/test_additional_resource.py index 453391a5a5..768c2ba4ea 100644 --- a/tests/http/test_additional_resource.py +++ b/tests/http/test_additional_resource.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/test_endpoint.py b/tests/http/test_endpoint.py index d06ea518ce..1f9a2f9b1d 100644 --- a/tests/http/test_endpoint.py +++ b/tests/http/test_endpoint.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/test_fedclient.py b/tests/http/test_fedclient.py index 21c1297171..9e97185507 100644 --- a/tests/http/test_fedclient.py +++ b/tests/http/test_fedclient.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/test_proxyagent.py b/tests/http/test_proxyagent.py index 3ea8b5bec7..fefc8099c9 100644 --- a/tests/http/test_proxyagent.py +++ b/tests/http/test_proxyagent.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/test_servlet.py b/tests/http/test_servlet.py index f979c96f7c..a80bfb9f4e 100644 --- a/tests/http/test_servlet.py +++ b/tests/http/test_servlet.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/http/test_simple_client.py b/tests/http/test_simple_client.py index cc4cae320d..c85a3665c1 100644 --- a/tests/http/test_simple_client.py +++ b/tests/http/test_simple_client.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/logging/__init__.py b/tests/logging/__init__.py index a58d51441c..1acf5666a8 100644 --- a/tests/logging/__init__.py +++ b/tests/logging/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/logging/test_remote_handler.py b/tests/logging/test_remote_handler.py index 4bc27a1d7d..b0d046fe00 100644 --- a/tests/logging/test_remote_handler.py +++ b/tests/logging/test_remote_handler.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/logging/test_terse_json.py b/tests/logging/test_terse_json.py index 215fd8b0f9..6cddb95c30 100644 --- a/tests/logging/test_terse_json.py +++ b/tests/logging/test_terse_json.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/module_api/test_api.py b/tests/module_api/test_api.py index 349f93560e..742ad14b8c 100644 --- a/tests/module_api/test_api.py +++ b/tests/module_api/test_api.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/push/test_email.py b/tests/push/test_email.py index 941cf42429..e04bc5c9a6 100644 --- a/tests/push/test_email.py +++ b/tests/push/test_email.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/push/test_http.py b/tests/push/test_http.py index 4074ade87a..ffd75b1491 100644 --- a/tests/push/test_http.py +++ b/tests/push/test_http.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/push/test_push_rule_evaluator.py b/tests/push/test_push_rule_evaluator.py index 4a841f5bb8..45906ce720 100644 --- a/tests/push/test_push_rule_evaluator.py +++ b/tests/push/test_push_rule_evaluator.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/__init__.py b/tests/replication/__init__.py index b7df13c9ee..f43a360a80 100644 --- a/tests/replication/__init__.py +++ b/tests/replication/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/_base.py b/tests/replication/_base.py index aff19d9fb3..36138d69aa 100644 --- a/tests/replication/_base.py +++ b/tests/replication/_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/slave/__init__.py b/tests/replication/slave/__init__.py index b7df13c9ee..f43a360a80 100644 --- a/tests/replication/slave/__init__.py +++ b/tests/replication/slave/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/slave/storage/__init__.py b/tests/replication/slave/storage/__init__.py index b7df13c9ee..f43a360a80 100644 --- a/tests/replication/slave/storage/__init__.py +++ b/tests/replication/slave/storage/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/__init__.py b/tests/replication/tcp/__init__.py index 1453d04571..743fb9904a 100644 --- a/tests/replication/tcp/__init__.py +++ b/tests/replication/tcp/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/streams/__init__.py b/tests/replication/tcp/streams/__init__.py index 1453d04571..743fb9904a 100644 --- a/tests/replication/tcp/streams/__init__.py +++ b/tests/replication/tcp/streams/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/streams/test_account_data.py b/tests/replication/tcp/streams/test_account_data.py index 153634d4ee..cdd052001b 100644 --- a/tests/replication/tcp/streams/test_account_data.py +++ b/tests/replication/tcp/streams/test_account_data.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/streams/test_events.py b/tests/replication/tcp/streams/test_events.py index 77856fc304..323237c1bb 100644 --- a/tests/replication/tcp/streams/test_events.py +++ b/tests/replication/tcp/streams/test_events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/streams/test_federation.py b/tests/replication/tcp/streams/test_federation.py index aa4bf1c7e3..ffec06a0d6 100644 --- a/tests/replication/tcp/streams/test_federation.py +++ b/tests/replication/tcp/streams/test_federation.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/streams/test_receipts.py b/tests/replication/tcp/streams/test_receipts.py index 7d848e41ff..7f5d932f0b 100644 --- a/tests/replication/tcp/streams/test_receipts.py +++ b/tests/replication/tcp/streams/test_receipts.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/streams/test_typing.py b/tests/replication/tcp/streams/test_typing.py index 4a0b342264..ecd360c2d0 100644 --- a/tests/replication/tcp/streams/test_typing.py +++ b/tests/replication/tcp/streams/test_typing.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/test_commands.py b/tests/replication/tcp/test_commands.py index 60c10a441a..cca7ebb719 100644 --- a/tests/replication/tcp/test_commands.py +++ b/tests/replication/tcp/test_commands.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/tcp/test_remote_server_up.py b/tests/replication/tcp/test_remote_server_up.py index 1fe9d5b4d0..262c35cef3 100644 --- a/tests/replication/tcp/test_remote_server_up.py +++ b/tests/replication/tcp/test_remote_server_up.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/test_auth.py b/tests/replication/test_auth.py index f8fd8a843c..1346e0e160 100644 --- a/tests/replication/test_auth.py +++ b/tests/replication/test_auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/test_client_reader_shard.py b/tests/replication/test_client_reader_shard.py index 5da1d5dc4d..b9751efdc5 100644 --- a/tests/replication/test_client_reader_shard.py +++ b/tests/replication/test_client_reader_shard.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/test_federation_ack.py b/tests/replication/test_federation_ack.py index 44ad5eec57..04a869e295 100644 --- a/tests/replication/test_federation_ack.py +++ b/tests/replication/test_federation_ack.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/test_federation_sender_shard.py b/tests/replication/test_federation_sender_shard.py index 8ca595c3ee..48ab3aa4e3 100644 --- a/tests/replication/test_federation_sender_shard.py +++ b/tests/replication/test_federation_sender_shard.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/test_multi_media_repo.py b/tests/replication/test_multi_media_repo.py index b0800f9840..76e6644353 100644 --- a/tests/replication/test_multi_media_repo.py +++ b/tests/replication/test_multi_media_repo.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/test_pusher_shard.py b/tests/replication/test_pusher_shard.py index 1f12bde1aa..1e4e3821b9 100644 --- a/tests/replication/test_pusher_shard.py +++ b/tests/replication/test_pusher_shard.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/replication/test_sharded_event_persister.py b/tests/replication/test_sharded_event_persister.py index 6c2e1674cb..d739eb6b17 100644 --- a/tests/replication/test_sharded_event_persister.py +++ b/tests/replication/test_sharded_event_persister.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/__init__.py b/tests/rest/__init__.py index fe0ac3f8e9..629e2df74a 100644 --- a/tests/rest/__init__.py +++ b/tests/rest/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/__init__.py b/tests/rest/admin/__init__.py index 1453d04571..743fb9904a 100644 --- a/tests/rest/admin/__init__.py +++ b/tests/rest/admin/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/test_admin.py b/tests/rest/admin/test_admin.py index 4abcbe3f55..2f7090e554 100644 --- a/tests/rest/admin/test_admin.py +++ b/tests/rest/admin/test_admin.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/test_device.py b/tests/rest/admin/test_device.py index 2a1bcf1760..ecbee30bb5 100644 --- a/tests/rest/admin/test_device.py +++ b/tests/rest/admin/test_device.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Dirk Klimpel # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/test_event_reports.py b/tests/rest/admin/test_event_reports.py index e30ffe4fa0..8c66da3af4 100644 --- a/tests/rest/admin/test_event_reports.py +++ b/tests/rest/admin/test_event_reports.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Dirk Klimpel # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/test_media.py b/tests/rest/admin/test_media.py index 31db472cd3..ac7b219700 100644 --- a/tests/rest/admin/test_media.py +++ b/tests/rest/admin/test_media.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # 
Copyright 2020 Dirk Klimpel # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/test_room.py b/tests/rest/admin/test_room.py index 85f77c0a65..6bcd997085 100644 --- a/tests/rest/admin/test_room.py +++ b/tests/rest/admin/test_room.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Dirk Klimpel # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/test_statistics.py b/tests/rest/admin/test_statistics.py index 1f1d11f527..363bdeeb2d 100644 --- a/tests/rest/admin/test_statistics.py +++ b/tests/rest/admin/test_statistics.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Dirk Klimpel # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/admin/test_user.py b/tests/rest/admin/test_user.py index 5070c96984..2844c493fc 100644 --- a/tests/rest/admin/test_user.py +++ b/tests/rest/admin/test_user.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/__init__.py b/tests/rest/client/__init__.py index fe0ac3f8e9..629e2df74a 100644 --- a/tests/rest/client/__init__.py +++ b/tests/rest/client/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/test_consent.py b/tests/rest/client/test_consent.py index c74693e9b2..5cc62a910a 100644 --- a/tests/rest/client/test_consent.py +++ b/tests/rest/client/test_consent.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/test_ephemeral_message.py b/tests/rest/client/test_ephemeral_message.py index 56937dcd2e..eec0fc01f9 100644 --- a/tests/rest/client/test_ephemeral_message.py +++ b/tests/rest/client/test_ephemeral_message.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/test_identity.py b/tests/rest/client/test_identity.py index c0a9fc6925..478296ba0e 100644 --- a/tests/rest/client/test_identity.py +++ b/tests/rest/client/test_identity.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/test_power_levels.py b/tests/rest/client/test_power_levels.py index 5256c11fe6..ba5ad47df5 100644 --- a/tests/rest/client/test_power_levels.py +++ b/tests/rest/client/test_power_levels.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/test_redactions.py b/tests/rest/client/test_redactions.py index e0c74591b6..dfd85221d0 100644 --- a/tests/rest/client/test_redactions.py +++ b/tests/rest/client/test_redactions.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/test_retention.py b/tests/rest/client/test_retention.py index f892a71228..e1a6e73e17 100644 --- a/tests/rest/client/test_retention.py +++ b/tests/rest/client/test_retention.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/test_third_party_rules.py b/tests/rest/client/test_third_party_rules.py index a7ebe0c3e9..e1fe72fc5d 100644 --- a/tests/rest/client/test_third_party_rules.py +++ b/tests/rest/client/test_third_party_rules.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the 'License'); diff --git a/tests/rest/client/v1/__init__.py b/tests/rest/client/v1/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/tests/rest/client/v1/__init__.py +++ b/tests/rest/client/v1/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v1/test_directory.py b/tests/rest/client/v1/test_directory.py index edd1d184f8..8ed470490b 100644 --- a/tests/rest/client/v1/test_directory.py +++ b/tests/rest/client/v1/test_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v1/test_events.py b/tests/rest/client/v1/test_events.py index 87a18d2cb9..852bda408c 100644 --- a/tests/rest/client/v1/test_events.py +++ b/tests/rest/client/v1/test_events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v1/test_login.py b/tests/rest/client/v1/test_login.py index c7b79ab8a7..605b952316 100644 --- a/tests/rest/client/v1/test_login.py +++ b/tests/rest/client/v1/test_login.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v1/test_presence.py b/tests/rest/client/v1/test_presence.py index c136827f79..3a050659ca 100644 --- a/tests/rest/client/v1/test_presence.py +++ b/tests/rest/client/v1/test_presence.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v1/test_profile.py b/tests/rest/client/v1/test_profile.py index f3448c94dd..165ad33fb7 100644 --- a/tests/rest/client/v1/test_profile.py +++ b/tests/rest/client/v1/test_profile.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v1/test_push_rule_attrs.py b/tests/rest/client/v1/test_push_rule_attrs.py index 2bc512d75e..d077616082 100644 --- a/tests/rest/client/v1/test_push_rule_attrs.py +++ b/tests/rest/client/v1/test_push_rule_attrs.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v1/test_rooms.py b/tests/rest/client/v1/test_rooms.py index 4df20c90fd..92babf65e0 100644 --- a/tests/rest/client/v1/test_rooms.py +++ b/tests/rest/client/v1/test_rooms.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2018-2019 New Vector Ltd diff --git a/tests/rest/client/v1/test_typing.py b/tests/rest/client/v1/test_typing.py index 0b8f565121..0aad48a162 100644 --- a/tests/rest/client/v1/test_typing.py +++ b/tests/rest/client/v1/test_typing.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector # diff --git a/tests/rest/client/v1/utils.py b/tests/rest/client/v1/utils.py index a6a292b20c..ed55a640af 100644 --- a/tests/rest/client/v1/utils.py +++ b/tests/rest/client/v1/utils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # Copyright 2018-2019 New Vector Ltd diff --git a/tests/rest/client/v2_alpha/test_account.py b/tests/rest/client/v2_alpha/test_account.py index e72b61963d..4ef19145d1 100644 --- a/tests/rest/client/v2_alpha/test_account.py +++ b/tests/rest/client/v2_alpha/test_account.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. diff --git a/tests/rest/client/v2_alpha/test_auth.py b/tests/rest/client/v2_alpha/test_auth.py index ed433d9333..485e3650c3 100644 --- a/tests/rest/client/v2_alpha/test_auth.py +++ b/tests/rest/client/v2_alpha/test_auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # Copyright 2020-2021 The Matrix.org Foundation C.I.C # diff --git a/tests/rest/client/v2_alpha/test_capabilities.py b/tests/rest/client/v2_alpha/test_capabilities.py index 287a1a485c..874052c61c 100644 --- a/tests/rest/client/v2_alpha/test_capabilities.py +++ b/tests/rest/client/v2_alpha/test_capabilities.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v2_alpha/test_filter.py b/tests/rest/client/v2_alpha/test_filter.py index f761c44936..c7e47725b7 100644 --- a/tests/rest/client/v2_alpha/test_filter.py +++ b/tests/rest/client/v2_alpha/test_filter.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v2_alpha/test_password_policy.py b/tests/rest/client/v2_alpha/test_password_policy.py index 5ebc5707a5..6f07ff6cbb 100644 --- a/tests/rest/client/v2_alpha/test_password_policy.py +++ b/tests/rest/client/v2_alpha/test_password_policy.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v2_alpha/test_register.py b/tests/rest/client/v2_alpha/test_register.py index cd60ea7081..054d4e4140 100644 --- a/tests/rest/client/v2_alpha/test_register.py +++ b/tests/rest/client/v2_alpha/test_register.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017-2018 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. 
diff --git a/tests/rest/client/v2_alpha/test_relations.py b/tests/rest/client/v2_alpha/test_relations.py index 21ee436b91..856aa8682f 100644 --- a/tests/rest/client/v2_alpha/test_relations.py +++ b/tests/rest/client/v2_alpha/test_relations.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v2_alpha/test_shared_rooms.py b/tests/rest/client/v2_alpha/test_shared_rooms.py index dd83a1f8ff..cedb9614a8 100644 --- a/tests/rest/client/v2_alpha/test_shared_rooms.py +++ b/tests/rest/client/v2_alpha/test_shared_rooms.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Half-Shot # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/client/v2_alpha/test_sync.py b/tests/rest/client/v2_alpha/test_sync.py index 2dbf42397a..dbcbdf159a 100644 --- a/tests/rest/client/v2_alpha/test_sync.py +++ b/tests/rest/client/v2_alpha/test_sync.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018-2019 New Vector Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/tests/rest/client/v2_alpha/test_upgrade_room.py b/tests/rest/client/v2_alpha/test_upgrade_room.py index d890d11863..5f3f15fc57 100644 --- a/tests/rest/client/v2_alpha/test_upgrade_room.py +++ b/tests/rest/client/v2_alpha/test_upgrade_room.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/key/v2/test_remote_key_resource.py b/tests/rest/key/v2/test_remote_key_resource.py index eb8687ce68..3b275bc23b 100644 --- a/tests/rest/key/v2/test_remote_key_resource.py +++ b/tests/rest/key/v2/test_remote_key_resource.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/media/__init__.py b/tests/rest/media/__init__.py index a354d38ca8..b1ee10cfcc 100644 --- a/tests/rest/media/__init__.py +++ b/tests/rest/media/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/media/v1/__init__.py b/tests/rest/media/v1/__init__.py index a354d38ca8..b1ee10cfcc 100644 --- a/tests/rest/media/v1/__init__.py +++ b/tests/rest/media/v1/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/media/v1/test_base.py b/tests/rest/media/v1/test_base.py index ebd7869208..f761e23f1b 100644 --- a/tests/rest/media/v1/test_base.py +++ b/tests/rest/media/v1/test_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/media/v1/test_media_storage.py b/tests/rest/media/v1/test_media_storage.py index 375f0b7977..4a213d13dd 100644 --- a/tests/rest/media/v1/test_media_storage.py +++ b/tests/rest/media/v1/test_media_storage.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/media/v1/test_url_preview.py b/tests/rest/media/v1/test_url_preview.py index 9067463e54..d3ef7bb4c6 100644 --- a/tests/rest/media/v1/test_url_preview.py +++ b/tests/rest/media/v1/test_url_preview.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/test_health.py b/tests/rest/test_health.py index 32acd93dc1..01d48c3860 100644 --- a/tests/rest/test_health.py +++ b/tests/rest/test_health.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/rest/test_well_known.py b/tests/rest/test_well_known.py index 14de0921be..ac0e427752 100644 --- a/tests/rest/test_well_known.py +++ b/tests/rest/test_well_known.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/scripts/test_new_matrix_user.py b/tests/scripts/test_new_matrix_user.py index 885b95a51f..6f3c365c9a 100644 --- a/tests/scripts/test_new_matrix_user.py +++ b/tests/scripts/test_new_matrix_user.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/server_notices/test_consent.py b/tests/server_notices/test_consent.py index 4dd5a36178..ac98259b7e 100644 --- a/tests/server_notices/test_consent.py +++ b/tests/server_notices/test_consent.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/server_notices/test_resource_limits_server_notices.py b/tests/server_notices/test_resource_limits_server_notices.py index 450b4ec710..d46521ccdc 100644 --- a/tests/server_notices/test_resource_limits_server_notices.py +++ b/tests/server_notices/test_resource_limits_server_notices.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018, 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/state/test_v2.py b/tests/state/test_v2.py index 66e3cafe8e..43fc79ca74 100644 --- a/tests/state/test_v2.py +++ b/tests/state/test_v2.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test__base.py b/tests/storage/test__base.py index 1ac4ebc61d..6339a43f0c 100644 --- a/tests/storage/test__base.py +++ b/tests/storage/test__base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2019 New Vector Ltd # diff --git a/tests/storage/test_account_data.py b/tests/storage/test_account_data.py index 38444e48e2..01af49a16b 100644 --- a/tests/storage/test_account_data.py +++ b/tests/storage/test_account_data.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_appservice.py b/tests/storage/test_appservice.py index e755a4db62..666bffe257 100644 --- a/tests/storage/test_appservice.py +++ b/tests/storage/test_appservice.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_base.py b/tests/storage/test_base.py index 54e9e7f6fe..3b45a7efd8 100644 --- a/tests/storage/test_base.py +++ b/tests/storage/test_base.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_cleanup_extrems.py b/tests/storage/test_cleanup_extrems.py index b02fb32ced..aa20588bbe 100644 --- a/tests/storage/test_cleanup_extrems.py +++ b/tests/storage/test_cleanup_extrems.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_client_ips.py b/tests/storage/test_client_ips.py index f7f75320ba..e57fce9694 100644 --- a/tests/storage/test_client_ips.py +++ b/tests/storage/test_client_ips.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/tests/storage/test_database.py b/tests/storage/test_database.py index a906d30e73..6fbac0ab14 100644 --- a/tests/storage/test_database.py +++ b/tests/storage/test_database.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_devices.py b/tests/storage/test_devices.py index ef4cf8d0f1..6790aa5242 100644 --- a/tests/storage/test_devices.py +++ b/tests/storage/test_devices.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_directory.py b/tests/storage/test_directory.py index 0db233fd68..41bef62ca8 100644 --- a/tests/storage/test_directory.py +++ b/tests/storage/test_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_e2e_room_keys.py b/tests/storage/test_e2e_room_keys.py index 3d7760d5d9..9b6b425425 100644 --- a/tests/storage/test_e2e_room_keys.py +++ b/tests/storage/test_e2e_room_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_end_to_end_keys.py b/tests/storage/test_end_to_end_keys.py index 1e54b940fd..3bf6e337f4 100644 --- a/tests/storage/test_end_to_end_keys.py +++ b/tests/storage/test_end_to_end_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_event_chain.py b/tests/storage/test_event_chain.py index 16daa66cc9..d87f124c26 100644 --- a/tests/storage/test_event_chain.py +++ b/tests/storage/test_event_chain.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the 'License'); diff --git a/tests/storage/test_event_federation.py b/tests/storage/test_event_federation.py index d597d712d6..a0e2259478 100644 --- a/tests/storage/test_event_federation.py +++ b/tests/storage/test_event_federation.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the 'License'); diff --git a/tests/storage/test_event_metrics.py b/tests/storage/test_event_metrics.py index 7691f2d790..397e68fe0a 100644 --- a/tests/storage/test_event_metrics.py +++ b/tests/storage/test_event_metrics.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the 'License'); diff --git a/tests/storage/test_event_push_actions.py b/tests/storage/test_event_push_actions.py index 0289942f88..1930b37eda 100644 --- a/tests/storage/test_event_push_actions.py +++ b/tests/storage/test_event_push_actions.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_events.py b/tests/storage/test_events.py index ed898b8dbb..617bc8091f 100644 --- a/tests/storage/test_events.py +++ b/tests/storage/test_events.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_id_generators.py b/tests/storage/test_id_generators.py index 6c389fe9ac..792b1c44c1 100644 --- a/tests/storage/test_id_generators.py +++ b/tests/storage/test_id_generators.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_keys.py b/tests/storage/test_keys.py index 95f309fbbc..a94b5fd721 100644 --- a/tests/storage/test_keys.py +++ b/tests/storage/test_keys.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_main.py b/tests/storage/test_main.py index e9e3bca3bf..d2b7b89952 100644 --- a/tests/storage/test_main.py +++ b/tests/storage/test_main.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Awesome Technologies Innovationslabor GmbH # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_monthly_active_users.py b/tests/storage/test_monthly_active_users.py index 47556791f4..944dbc34a2 100644 --- a/tests/storage/test_monthly_active_users.py +++ b/tests/storage/test_monthly_active_users.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_profile.py b/tests/storage/test_profile.py index d18ceb41a9..8a446da848 100644 --- a/tests/storage/test_profile.py +++ b/tests/storage/test_profile.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_purge.py b/tests/storage/test_purge.py index 41af8c4847..54c5b470c7 100644 --- a/tests/storage/test_purge.py +++ b/tests/storage/test_purge.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_redaction.py b/tests/storage/test_redaction.py index 2d2f58903c..bb31ab756d 100644 --- a/tests/storage/test_redaction.py +++ b/tests/storage/test_redaction.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_registration.py b/tests/storage/test_registration.py index c82cf15bc2..9748065282 100644 --- a/tests/storage/test_registration.py +++ b/tests/storage/test_registration.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2021 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_room.py b/tests/storage/test_room.py index 0089d33c93..70257bf210 100644 --- a/tests/storage/test_room.py +++ b/tests/storage/test_room.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_roommember.py b/tests/storage/test_roommember.py index d2aed66f6d..9fa968f6bb 100644 --- a/tests/storage/test_roommember.py +++ b/tests/storage/test_roommember.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2019 The Matrix.org Foundation C.I.C. # diff --git a/tests/storage/test_state.py b/tests/storage/test_state.py index f06b452fa9..8695264595 100644 --- a/tests/storage/test_state.py +++ b/tests/storage/test_state.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_transactions.py b/tests/storage/test_transactions.py index 8e817e2c7f..b7f7eae8d0 100644 --- a/tests/storage/test_transactions.py +++ b/tests/storage/test_transactions.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/storage/test_user_directory.py b/tests/storage/test_user_directory.py index 019c5b7b14..222e5d129d 100644 --- a/tests/storage/test_user_directory.py +++ b/tests/storage/test_user_directory.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_distributor.py b/tests/test_distributor.py index 6a6cf709f6..f8341041ee 100644 --- a/tests/test_distributor.py +++ b/tests/test_distributor.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/tests/test_event_auth.py b/tests/test_event_auth.py index b5f18344dc..88888319cc 100644 --- a/tests/test_event_auth.py +++ b/tests/test_event_auth.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_federation.py b/tests/test_federation.py index 8928597d17..86a44a13da 100644 --- a/tests/test_federation.py +++ b/tests/test_federation.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_mau.py b/tests/test_mau.py index 7d92a16a8d..fa6ef92b3b 100644 --- a/tests/test_mau.py +++ b/tests/test_mau.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_metrics.py b/tests/test_metrics.py index f696fcf89e..b4574b2ffe 100644 --- a/tests/test_metrics.py +++ b/tests/test_metrics.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # Copyright 2019 Matrix.org Foundation C.I.C. # diff --git a/tests/test_phone_home.py b/tests/test_phone_home.py index 0f800a075b..09707a74d7 100644 --- a/tests/test_phone_home.py +++ b/tests/test_phone_home.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_preview.py b/tests/test_preview.py index ea83299918..cac3d81ac1 100644 --- a/tests/test_preview.py +++ b/tests/test_preview.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_state.py b/tests/test_state.py index 0d626f49f6..62f7095873 100644 --- a/tests/test_state.py +++ b/tests/test_state.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_test_utils.py b/tests/test_test_utils.py index b921ac52c0..f2ef1c6051 100644 --- a/tests/test_test_utils.py +++ b/tests/test_test_utils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_types.py b/tests/test_types.py index acdeea7a09..d7881021d3 100644 --- a/tests/test_types.py +++ b/tests/test_types.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_utils/__init__.py b/tests/test_utils/__init__.py index b557ffd692..be6302d170 100644 --- a/tests/test_utils/__init__.py +++ b/tests/test_utils/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # Copyright 2020 The Matrix.org Foundation C.I.C # diff --git a/tests/test_utils/event_injection.py b/tests/test_utils/event_injection.py index 3dfbf8f8a9..e9ec9e085b 100644 --- a/tests/test_utils/event_injection.py +++ b/tests/test_utils/event_injection.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # Copyright 2020 The Matrix.org Foundation C.I.C # diff --git a/tests/test_utils/html_parsers.py b/tests/test_utils/html_parsers.py index ad563eb3f0..1fbb38f4be 100644 --- a/tests/test_utils/html_parsers.py +++ b/tests/test_utils/html_parsers.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_utils/logging_setup.py b/tests/test_utils/logging_setup.py index 74568b34f8..51a197a8c6 100644 --- a/tests/test_utils/logging_setup.py +++ b/tests/test_utils/logging_setup.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/test_visibility.py b/tests/test_visibility.py index e502ac197e..94b19788d7 100644 --- a/tests/test_visibility.py +++ b/tests/test_visibility.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/unittest.py b/tests/unittest.py index 92764434bd..d890ad981f 100644 --- a/tests/unittest.py +++ b/tests/unittest.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018 New Vector # Copyright 2019 Matrix.org Federation C.I.C diff --git a/tests/util/__init__.py b/tests/util/__init__.py index bfebb0f644..5e83dba2ed 100644 --- a/tests/util/__init__.py +++ b/tests/util/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/caches/__init__.py b/tests/util/caches/__init__.py index 451dae3b6c..830e2dfe91 100644 --- a/tests/util/caches/__init__.py +++ b/tests/util/caches/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/caches/test_cached_call.py b/tests/util/caches/test_cached_call.py index f349b5ced0..80b97167ba 100644 --- a/tests/util/caches/test_cached_call.py +++ b/tests/util/caches/test_cached_call.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/caches/test_deferred_cache.py b/tests/util/caches/test_deferred_cache.py index c24c33ee91..54a88a8325 100644 --- a/tests/util/caches/test_deferred_cache.py +++ b/tests/util/caches/test_deferred_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/caches/test_descriptors.py b/tests/util/caches/test_descriptors.py index 2d1f9360e0..40cd98e2d8 100644 --- a/tests/util/caches/test_descriptors.py +++ b/tests/util/caches/test_descriptors.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/tests/util/caches/test_ttlcache.py b/tests/util/caches/test_ttlcache.py index 23018081e5..fe8314057d 100644 --- a/tests/util/caches/test_ttlcache.py +++ b/tests/util/caches/test_ttlcache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_async_utils.py b/tests/util/test_async_utils.py index 17fd86d02d..069f875962 100644 --- a/tests/util/test_async_utils.py +++ b/tests/util/test_async_utils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_dict_cache.py b/tests/util/test_dict_cache.py index 2f41333f4c..bee66dee43 100644 --- a/tests/util/test_dict_cache.py +++ b/tests/util/test_dict_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_expiring_cache.py b/tests/util/test_expiring_cache.py index 49ffeebd0e..e6e13ba06c 100644 --- a/tests/util/test_expiring_cache.py +++ b/tests/util/test_expiring_cache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2017 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_file_consumer.py b/tests/util/test_file_consumer.py index d1372f6bc2..3bb4695405 100644 --- a/tests/util/test_file_consumer.py +++ b/tests/util/test_file_consumer.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_itertools.py b/tests/util/test_itertools.py index e931a7ec18..1bd0b45d94 100644 --- a/tests/util/test_itertools.py +++ b/tests/util/test_itertools.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_linearizer.py b/tests/util/test_linearizer.py index 0e52811948..c4a3917b23 100644 --- a/tests/util/test_linearizer.py +++ b/tests/util/test_linearizer.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # Copyright 2018 New Vector Ltd # diff --git a/tests/util/test_logformatter.py b/tests/util/test_logformatter.py index 0fb60caacb..a2e08281e6 100644 --- a/tests/util/test_logformatter.py +++ b/tests/util/test_logformatter.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2018 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_lrucache.py b/tests/util/test_lrucache.py index ce4f1cc30a..df3e27779f 100644 --- a/tests/util/test_lrucache.py +++ b/tests/util/test_lrucache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_ratelimitutils.py b/tests/util/test_ratelimitutils.py index 3fed55090a..34aaffe859 100644 --- a/tests/util/test_ratelimitutils.py +++ b/tests/util/test_ratelimitutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_retryutils.py b/tests/util/test_retryutils.py index 5f46ed0cef..9b2be83a43 100644 --- a/tests/util/test_retryutils.py +++ b/tests/util/test_retryutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2019 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_rwlock.py b/tests/util/test_rwlock.py index d3dea3b52a..a10071c70f 100644 --- a/tests/util/test_rwlock.py +++ b/tests/util/test_rwlock.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_stringutils.py b/tests/util/test_stringutils.py index 8491f7cc83..f7fecd9cf3 100644 --- a/tests/util/test_stringutils.py +++ b/tests/util/test_stringutils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 The Matrix.org Foundation C.I.C. 
# # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_threepids.py b/tests/util/test_threepids.py index 5513724d87..d957b953bb 100644 --- a/tests/util/test_threepids.py +++ b/tests/util/test_threepids.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2020 Dirk Klimpel # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_treecache.py b/tests/util/test_treecache.py index a5f2261208..3b077af27e 100644 --- a/tests/util/test_treecache.py +++ b/tests/util/test_treecache.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/util/test_wheel_timer.py b/tests/util/test_wheel_timer.py index 03201a4d9b..0d5039de04 100644 --- a/tests/util/test_wheel_timer.py +++ b/tests/util/test_wheel_timer.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/tests/utils.py b/tests/utils.py index c78d3e5ba7..af6b32fc66 100644 --- a/tests/utils.py +++ b/tests/utils.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2018-2019 New Vector Ltd # -- cgit 1.5.1 From c9a2b5d4022feab97fa1bce1d67360d09a6e3dcc Mon Sep 17 00:00:00 2001 From: rkfg Date: Wed, 14 Apr 2021 18:30:59 +0300 Subject: More robust handling of the Content-Type header for thumbnail generation (#9788) Signed-off-by: Sergey Shpikin --- changelog.d/9788.bugfix | 1 + synapse/config/repository.py | 1 + synapse/rest/media/v1/media_repository.py | 3 +++ 3 files changed, 5 insertions(+) create mode 100644 changelog.d/9788.bugfix diff --git a/changelog.d/9788.bugfix b/changelog.d/9788.bugfix new file mode 100644 index 0000000000..edb58fbd5b --- /dev/null +++ b/changelog.d/9788.bugfix @@ -0,0 +1 @@ +Fix thumbnail generation for some sites with non-standard content types. Contributed by @rkfg. 
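The fix works in two parts: `image/jpg` (a common misspelling of `image/jpeg`) is registered as a thumbnailable media type, and any parameters a server appends to the Content-Type header are stripped before the thumbnail requirements lookup. A rough standalone sketch of the stripping step, mirroring the `find(";")` logic in the patch below (the helper name is illustrative, not part of the patch):

def strip_content_type_params(media_type: str) -> str:
    # A Content-Type value may carry parameters after a semicolon,
    # e.g. "image/jpeg; charset=utf-8"; only the bare "type/subtype"
    # part identifies the media type.
    scpos = media_type.find(";")
    if scpos > 0:
        media_type = media_type[:scpos]
    return media_type

assert strip_content_type_params("image/jpeg; charset=utf-8") == "image/jpeg"
assert strip_content_type_params("image/png") == "image/png"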
diff --git a/synapse/config/repository.py b/synapse/config/repository.py index 146bc55d6f..c78a83abe1 100644 --- a/synapse/config/repository.py +++ b/synapse/config/repository.py @@ -70,6 +70,7 @@ def parse_thumbnail_requirements(thumbnail_sizes): jpeg_thumbnail = ThumbnailRequirement(width, height, method, "image/jpeg") png_thumbnail = ThumbnailRequirement(width, height, method, "image/png") requirements.setdefault("image/jpeg", []).append(jpeg_thumbnail) + requirements.setdefault("image/jpg", []).append(jpeg_thumbnail) requirements.setdefault("image/webp", []).append(jpeg_thumbnail) requirements.setdefault("image/gif", []).append(png_thumbnail) requirements.setdefault("image/png", []).append(png_thumbnail) diff --git a/synapse/rest/media/v1/media_repository.py b/synapse/rest/media/v1/media_repository.py index 87e3645ddc..e8a875b900 100644 --- a/synapse/rest/media/v1/media_repository.py +++ b/synapse/rest/media/v1/media_repository.py @@ -467,6 +467,9 @@ class MediaRepository: return media_info def _get_thumbnail_requirements(self, media_type): + scpos = media_type.find(";") + if scpos > 0: + media_type = media_type[:scpos] return self.thumbnail_requirements.get(media_type, ()) def _generate_thumbnail( -- cgit 1.5.1 From 00a6db967655daf1d6db290b7e0d2bb53827ade9 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Wed, 14 Apr 2021 17:06:06 +0100 Subject: Move some replication processing out of generic_worker (#9796) Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> --- changelog.d/9796.misc | 1 + synapse/app/generic_worker.py | 470 +------------------------------------- synapse/handlers/presence.py | 246 ++++++++++++++++++++ synapse/replication/tcp/client.py | 231 ++++++++++++++++++- synapse/server.py | 13 +- tests/replication/_base.py | 8 +- 6 files changed, 486 insertions(+), 483 deletions(-) create mode 100644 changelog.d/9796.misc diff --git a/changelog.d/9796.misc b/changelog.d/9796.misc new file mode 100644 index 0000000000..59bb1813c3 --- /dev/null +++ b/changelog.d/9796.misc @@ -0,0 +1 @@ +Move some replication processing out of `generic_worker`. diff --git a/synapse/app/generic_worker.py b/synapse/app/generic_worker.py index e35e17492c..28e3b1aa3c 100644 --- a/synapse/app/generic_worker.py +++ b/synapse/app/generic_worker.py @@ -13,12 +13,9 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
-import contextlib import logging import sys -from typing import Dict, Iterable, Optional, Set - -from typing_extensions import ContextManager +from typing import Dict, Iterable, Optional from twisted.internet import address from twisted.web.resource import IResource @@ -40,24 +37,13 @@ from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.config.server import ListenerConfig -from synapse.federation import send_queue from synapse.federation.transport.server import TransportLayerServer -from synapse.handlers.presence import ( - BasePresenceHandler, - PresenceState, - get_interested_parties, -) from synapse.http.server import JsonResource, OptionsResource from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.http.site import SynapseSite from synapse.logging.context import LoggingContext from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy -from synapse.metrics.background_process_metrics import run_as_background_process from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource -from synapse.replication.http.presence import ( - ReplicationBumpPresenceActiveTime, - ReplicationPresenceSetState, -) from synapse.replication.slave.storage._base import BaseSlavedStore from synapse.replication.slave.storage.account_data import SlavedAccountDataStore from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore @@ -77,19 +63,6 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.slave.storage.room import RoomStore from synapse.replication.slave.storage.transactions import SlavedTransactionStore -from synapse.replication.tcp.client import ReplicationDataHandler -from synapse.replication.tcp.commands import ClearUserSyncsCommand -from synapse.replication.tcp.streams import ( - AccountDataStream, - DeviceListsStream, - GroupServerStream, - PresenceStream, - PushersStream, - PushRulesStream, - ReceiptsStream, - TagAccountDataStream, - ToDeviceStream, -) from synapse.rest.admin import register_servlets_for_media_repo from synapse.rest.client.v1 import events, login, room from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet @@ -128,7 +101,7 @@ from synapse.rest.client.versions import VersionsRestServlet from synapse.rest.health import HealthResource from synapse.rest.key.v2 import KeyApiV2Resource from synapse.rest.synapse.client import build_synapse_client_resource_tree -from synapse.server import HomeServer, cache_in_self +from synapse.server import HomeServer from synapse.storage.databases.main.censor_events import CensorEventsStore from synapse.storage.databases.main.client_ips import ClientIpWorkerStore from synapse.storage.databases.main.e2e_room_keys import EndToEndRoomKeyStore @@ -137,14 +110,11 @@ from synapse.storage.databases.main.metrics import ServerMetricsStore from synapse.storage.databases.main.monthly_active_users import ( MonthlyActiveUsersWorkerStore, ) -from synapse.storage.databases.main.presence import UserPresenceState from synapse.storage.databases.main.search import SearchWorkerStore from synapse.storage.databases.main.stats import StatsStore from synapse.storage.databases.main.transactions import TransactionWorkerStore from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore from 
synapse.storage.databases.main.user_directory import UserDirectoryStore -from synapse.types import ReadReceipt -from synapse.util.async_helpers import Linearizer from synapse.util.httpresourcetree import create_resource_tree from synapse.util.versionstring import get_version_string @@ -264,214 +234,6 @@ class KeyUploadServlet(RestServlet): return 200, {"one_time_key_counts": result} -class _NullContextManager(ContextManager[None]): - """A context manager which does nothing.""" - - def __exit__(self, exc_type, exc_val, exc_tb): - pass - - -UPDATE_SYNCING_USERS_MS = 10 * 1000 - - -class GenericWorkerPresence(BasePresenceHandler): - def __init__(self, hs): - super().__init__(hs) - self.hs = hs - self.is_mine_id = hs.is_mine_id - - self.presence_router = hs.get_presence_router() - self._presence_enabled = hs.config.use_presence - - # The number of ongoing syncs on this process, by user id. - # Empty if _presence_enabled is false. - self._user_to_num_current_syncs = {} # type: Dict[str, int] - - self.notifier = hs.get_notifier() - self.instance_id = hs.get_instance_id() - - # user_id -> last_sync_ms. Lists the users that have stopped syncing - # but we haven't notified the master of that yet - self.users_going_offline = {} - - self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs) - self._set_state_client = ReplicationPresenceSetState.make_client(hs) - - self._send_stop_syncing_loop = self.clock.looping_call( - self.send_stop_syncing, UPDATE_SYNCING_USERS_MS - ) - - self._busy_presence_enabled = hs.config.experimental.msc3026_enabled - - hs.get_reactor().addSystemEventTrigger( - "before", - "shutdown", - run_as_background_process, - "generic_presence.on_shutdown", - self._on_shutdown, - ) - - def _on_shutdown(self): - if self._presence_enabled: - self.hs.get_tcp_replication().send_command( - ClearUserSyncsCommand(self.instance_id) - ) - - def send_user_sync(self, user_id, is_syncing, last_sync_ms): - if self._presence_enabled: - self.hs.get_tcp_replication().send_user_sync( - self.instance_id, user_id, is_syncing, last_sync_ms - ) - - def mark_as_coming_online(self, user_id): - """A user has started syncing. Send a UserSync to the master, unless they - had recently stopped syncing. - - Args: - user_id (str) - """ - going_offline = self.users_going_offline.pop(user_id, None) - if not going_offline: - # Safe to skip because we haven't yet told the master they were offline - self.send_user_sync(user_id, True, self.clock.time_msec()) - - def mark_as_going_offline(self, user_id): - """A user has stopped syncing. We wait before notifying the master as - its likely they'll come back soon. This allows us to avoid sending - a stopped syncing immediately followed by a started syncing notification - to the master - - Args: - user_id (str) - """ - self.users_going_offline[user_id] = self.clock.time_msec() - - def send_stop_syncing(self): - """Check if there are any users who have stopped syncing a while ago - and haven't come back yet. If there are poke the master about them. - """ - now = self.clock.time_msec() - for user_id, last_sync_ms in list(self.users_going_offline.items()): - if now - last_sync_ms > UPDATE_SYNCING_USERS_MS: - self.users_going_offline.pop(user_id, None) - self.send_user_sync(user_id, False, last_sync_ms) - - async def user_syncing( - self, user_id: str, affect_presence: bool - ) -> ContextManager[None]: - """Record that a user is syncing. 
- - Called by the sync and events servlets to record that a user has connected to - this worker and is waiting for some events. - """ - if not affect_presence or not self._presence_enabled: - return _NullContextManager() - - curr_sync = self._user_to_num_current_syncs.get(user_id, 0) - self._user_to_num_current_syncs[user_id] = curr_sync + 1 - - # If we went from no in flight sync to some, notify replication - if self._user_to_num_current_syncs[user_id] == 1: - self.mark_as_coming_online(user_id) - - def _end(): - # We check that the user_id is in user_to_num_current_syncs because - # user_to_num_current_syncs may have been cleared if we are - # shutting down. - if user_id in self._user_to_num_current_syncs: - self._user_to_num_current_syncs[user_id] -= 1 - - # If we went from one in flight sync to non, notify replication - if self._user_to_num_current_syncs[user_id] == 0: - self.mark_as_going_offline(user_id) - - @contextlib.contextmanager - def _user_syncing(): - try: - yield - finally: - _end() - - return _user_syncing() - - async def notify_from_replication(self, states, stream_id): - parties = await get_interested_parties(self.store, self.presence_router, states) - room_ids_to_states, users_to_states = parties - - self.notifier.on_new_event( - "presence_key", - stream_id, - rooms=room_ids_to_states.keys(), - users=users_to_states.keys(), - ) - - async def process_replication_rows(self, token, rows): - states = [ - UserPresenceState( - row.user_id, - row.state, - row.last_active_ts, - row.last_federation_update_ts, - row.last_user_sync_ts, - row.status_msg, - row.currently_active, - ) - for row in rows - ] - - for state in states: - self.user_to_current_state[state.user_id] = state - - stream_id = token - await self.notify_from_replication(states, stream_id) - - def get_currently_syncing_users_for_replication(self) -> Iterable[str]: - return [ - user_id - for user_id, count in self._user_to_num_current_syncs.items() - if count > 0 - ] - - async def set_state(self, target_user, state, ignore_status_msg=False): - """Set the presence state of the user.""" - presence = state["presence"] - - valid_presence = ( - PresenceState.ONLINE, - PresenceState.UNAVAILABLE, - PresenceState.OFFLINE, - PresenceState.BUSY, - ) - - if presence not in valid_presence or ( - presence == PresenceState.BUSY and not self._busy_presence_enabled - ): - raise SynapseError(400, "Invalid presence state") - - user_id = target_user.to_string() - - # If presence is disabled, no-op - if not self.hs.config.use_presence: - return - - # Proxy request to master - await self._set_state_client( - user_id=user_id, state=state, ignore_status_msg=ignore_status_msg - ) - - async def bump_presence_active_time(self, user): - """We've seen the user do something that indicates they're interacting - with the app. - """ - # If presence is disabled, no-op - if not self.hs.config.use_presence: - return - - # Proxy request to master - user_id = user.to_string() - await self._bump_active_client(user_id=user_id) - - class GenericWorkerSlavedStore( # FIXME(#3714): We need to add UserDirectoryStore as we write directly # rather than going via the correct worker. 
@@ -657,234 +419,6 @@ class GenericWorkerServer(HomeServer): self.get_tcp_replication().start_replication(self) - @cache_in_self - def get_replication_data_handler(self): - return GenericWorkerReplicationHandler(self) - - @cache_in_self - def get_presence_handler(self): - return GenericWorkerPresence(self) - - -class GenericWorkerReplicationHandler(ReplicationDataHandler): - def __init__(self, hs): - super().__init__(hs) - - self.store = hs.get_datastore() - self.presence_handler = hs.get_presence_handler() # type: GenericWorkerPresence - self.notifier = hs.get_notifier() - - self.notify_pushers = hs.config.start_pushers - self.pusher_pool = hs.get_pusherpool() - - self.send_handler = None # type: Optional[FederationSenderHandler] - if hs.config.send_federation: - self.send_handler = FederationSenderHandler(hs) - - async def on_rdata(self, stream_name, instance_name, token, rows): - await super().on_rdata(stream_name, instance_name, token, rows) - await self._process_and_notify(stream_name, instance_name, token, rows) - - async def _process_and_notify(self, stream_name, instance_name, token, rows): - try: - if self.send_handler: - await self.send_handler.process_replication_rows( - stream_name, token, rows - ) - - if stream_name == PushRulesStream.NAME: - self.notifier.on_new_event( - "push_rules_key", token, users=[row.user_id for row in rows] - ) - elif stream_name in (AccountDataStream.NAME, TagAccountDataStream.NAME): - self.notifier.on_new_event( - "account_data_key", token, users=[row.user_id for row in rows] - ) - elif stream_name == ReceiptsStream.NAME: - self.notifier.on_new_event( - "receipt_key", token, rooms=[row.room_id for row in rows] - ) - await self.pusher_pool.on_new_receipts( - token, token, {row.room_id for row in rows} - ) - elif stream_name == ToDeviceStream.NAME: - entities = [row.entity for row in rows if row.entity.startswith("@")] - if entities: - self.notifier.on_new_event("to_device_key", token, users=entities) - elif stream_name == DeviceListsStream.NAME: - all_room_ids = set() # type: Set[str] - for row in rows: - if row.entity.startswith("@"): - room_ids = await self.store.get_rooms_for_user(row.entity) - all_room_ids.update(room_ids) - self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids) - elif stream_name == PresenceStream.NAME: - await self.presence_handler.process_replication_rows(token, rows) - elif stream_name == GroupServerStream.NAME: - self.notifier.on_new_event( - "groups_key", token, users=[row.user_id for row in rows] - ) - elif stream_name == PushersStream.NAME: - for row in rows: - if row.deleted: - self.stop_pusher(row.user_id, row.app_id, row.pushkey) - else: - await self.start_pusher(row.user_id, row.app_id, row.pushkey) - except Exception: - logger.exception("Error processing replication") - - async def on_position(self, stream_name: str, instance_name: str, token: int): - await super().on_position(stream_name, instance_name, token) - # Also call on_rdata to ensure that stream positions are properly reset. 
- await self.on_rdata(stream_name, instance_name, token, []) - - def stop_pusher(self, user_id, app_id, pushkey): - if not self.notify_pushers: - return - - key = "%s:%s" % (app_id, pushkey) - pushers_for_user = self.pusher_pool.pushers.get(user_id, {}) - pusher = pushers_for_user.pop(key, None) - if pusher is None: - return - logger.info("Stopping pusher %r / %r", user_id, key) - pusher.on_stop() - - async def start_pusher(self, user_id, app_id, pushkey): - if not self.notify_pushers: - return - - key = "%s:%s" % (app_id, pushkey) - logger.info("Starting pusher %r / %r", user_id, key) - return await self.pusher_pool.start_pusher_by_id(app_id, pushkey, user_id) - - def on_remote_server_up(self, server: str): - """Called when get a new REMOTE_SERVER_UP command.""" - - # Let's wake up the transaction queue for the server in case we have - # pending stuff to send to it. - if self.send_handler: - self.send_handler.wake_destination(server) - - -class FederationSenderHandler: - """Processes the fedration replication stream - - This class is only instantiate on the worker responsible for sending outbound - federation transactions. It receives rows from the replication stream and forwards - the appropriate entries to the FederationSender class. - """ - - def __init__(self, hs: GenericWorkerServer): - self.store = hs.get_datastore() - self._is_mine_id = hs.is_mine_id - self.federation_sender = hs.get_federation_sender() - self._hs = hs - - # Stores the latest position in the federation stream we've gotten up - # to. This is always set before we use it. - self.federation_position = None - - self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer") - - def wake_destination(self, server: str): - self.federation_sender.wake_destination(server) - - async def process_replication_rows(self, stream_name, token, rows): - # The federation stream contains things that we want to send out, e.g. - # presence, typing, etc. - if stream_name == "federation": - send_queue.process_rows_for_federation(self.federation_sender, rows) - await self.update_token(token) - - # ... and when new receipts happen - elif stream_name == ReceiptsStream.NAME: - await self._on_new_receipts(rows) - - # ... as well as device updates and messages - elif stream_name == DeviceListsStream.NAME: - # The entities are either user IDs (starting with '@') whose devices - # have changed, or remote servers that we need to tell about - # changes. - hosts = {row.entity for row in rows if not row.entity.startswith("@")} - for host in hosts: - self.federation_sender.send_device_messages(host) - - elif stream_name == ToDeviceStream.NAME: - # The to_device stream includes stuff to be pushed to both local - # clients and remote servers, so we ignore entities that start with - # '@' (since they'll be local users rather than destinations). 
- hosts = {row.entity for row in rows if not row.entity.startswith("@")} - for host in hosts: - self.federation_sender.send_device_messages(host) - - async def _on_new_receipts(self, rows): - """ - Args: - rows (Iterable[synapse.replication.tcp.streams.ReceiptsStream.ReceiptsStreamRow]): - new receipts to be processed - """ - for receipt in rows: - # we only want to send on receipts for our own users - if not self._is_mine_id(receipt.user_id): - continue - receipt_info = ReadReceipt( - receipt.room_id, - receipt.receipt_type, - receipt.user_id, - [receipt.event_id], - receipt.data, - ) - await self.federation_sender.send_read_receipt(receipt_info) - - async def update_token(self, token): - """Update the record of where we have processed to in the federation stream. - - Called after we have processed a an update received over replication. Sends - a FEDERATION_ACK back to the master, and stores the token that we have processed - in `federation_stream_position` so that we can restart where we left off. - """ - self.federation_position = token - - # We save and send the ACK to master asynchronously, so we don't block - # processing on persistence. We don't need to do this operation for - # every single RDATA we receive, we just need to do it periodically. - - if self._fed_position_linearizer.is_queued(None): - # There is already a task queued up to save and send the token, so - # no need to queue up another task. - return - - run_as_background_process("_save_and_send_ack", self._save_and_send_ack) - - async def _save_and_send_ack(self): - """Save the current federation position in the database and send an ACK - to master with where we're up to. - """ - try: - # We linearize here to ensure we don't have races updating the token - # - # XXX this appears to be redundant, since the ReplicationCommandHandler - # has a linearizer which ensures that we only process one line of - # replication data at a time. Should we remove it, or is it doing useful - # service for robustness? Or could we replace it with an assertion that - # we're not being re-entered? - - with (await self._fed_position_linearizer.queue(None)): - # We persist and ack the same position, so we take a copy of it - # here as otherwise it can get modified from underneath us. 
- current_position = self.federation_position - - await self.store.update_federation_out_pos( - "federation", current_position - ) - - # We ACK this token over replication so that the master can drop - # its in memory queues - self._hs.get_tcp_replication().send_federation_ack(current_position) - except Exception: - logger.exception("Error updating federation stream position") - def start(config_options): try: diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index 251b48148d..e120dd1f48 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -22,6 +22,7 @@ The methods that define policy are: - should_notify """ import abc +import contextlib import logging from contextlib import contextmanager from typing import ( @@ -48,6 +49,11 @@ from synapse.logging.context import run_in_background from synapse.logging.utils import log_function from synapse.metrics import LaterGauge from synapse.metrics.background_process_metrics import run_as_background_process +from synapse.replication.http.presence import ( + ReplicationBumpPresenceActiveTime, + ReplicationPresenceSetState, +) +from synapse.replication.tcp.commands import ClearUserSyncsCommand from synapse.state import StateHandler from synapse.storage.databases.main import DataStore from synapse.types import Collection, JsonDict, UserID, get_domain_from_id @@ -104,6 +110,10 @@ FEDERATION_PING_INTERVAL = 25 * 60 * 1000 # are dead. EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000 +# Delay before a worker tells the presence handler that a user has stopped +# syncing. +UPDATE_SYNCING_USERS_MS = 10 * 1000 + assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER @@ -208,6 +218,242 @@ class BasePresenceHandler(abc.ABC): with the app. """ + async def update_external_syncs_row( + self, process_id, user_id, is_syncing, sync_time_msec + ): + """Update the syncing users for an external process as a delta. + + This is a no-op when presence is handled by a different worker. + + Args: + process_id (str): An identifier for the process the users are + syncing against. This allows synapse to process updates + as user start and stop syncing against a given process. + user_id (str): The user who has started or stopped syncing + is_syncing (bool): Whether or not the user is now syncing + sync_time_msec(int): Time in ms when the user was last syncing + """ + pass + + async def update_external_syncs_clear(self, process_id): + """Marks all users that had been marked as syncing by a given process + as offline. + + Used when the process has stopped/disappeared. + + This is a no-op when presence is handled by a different worker. + """ + pass + + async def process_replication_rows(self, token, rows): + """Process presence stream rows received over replication.""" + pass + + +class _NullContextManager(ContextManager[None]): + """A context manager which does nothing.""" + + def __exit__(self, exc_type, exc_val, exc_tb): + pass + + +class WorkerPresenceHandler(BasePresenceHandler): + def __init__(self, hs): + super().__init__(hs) + self.hs = hs + self.is_mine_id = hs.is_mine_id + + self.presence_router = hs.get_presence_router() + self._presence_enabled = hs.config.use_presence + + # The number of ongoing syncs on this process, by user id. + # Empty if _presence_enabled is false. + self._user_to_num_current_syncs = {} # type: Dict[str, int] + + self.notifier = hs.get_notifier() + self.instance_id = hs.get_instance_id() + + # user_id -> last_sync_ms. 
Lists the users that have stopped syncing + # but we haven't notified the master of that yet + self.users_going_offline = {} + + self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs) + self._set_state_client = ReplicationPresenceSetState.make_client(hs) + + self._send_stop_syncing_loop = self.clock.looping_call( + self.send_stop_syncing, UPDATE_SYNCING_USERS_MS + ) + + self._busy_presence_enabled = hs.config.experimental.msc3026_enabled + + hs.get_reactor().addSystemEventTrigger( + "before", + "shutdown", + run_as_background_process, + "generic_presence.on_shutdown", + self._on_shutdown, + ) + + def _on_shutdown(self): + if self._presence_enabled: + self.hs.get_tcp_replication().send_command( + ClearUserSyncsCommand(self.instance_id) + ) + + def send_user_sync(self, user_id, is_syncing, last_sync_ms): + if self._presence_enabled: + self.hs.get_tcp_replication().send_user_sync( + self.instance_id, user_id, is_syncing, last_sync_ms + ) + + def mark_as_coming_online(self, user_id): + """A user has started syncing. Send a UserSync to the master, unless they + had recently stopped syncing. + + Args: + user_id (str) + """ + going_offline = self.users_going_offline.pop(user_id, None) + if not going_offline: + # Safe to skip because we haven't yet told the master they were offline + self.send_user_sync(user_id, True, self.clock.time_msec()) + + def mark_as_going_offline(self, user_id): + """A user has stopped syncing. We wait before notifying the master as + it's likely they'll come back soon. This allows us to avoid sending + a stopped syncing immediately followed by a started syncing notification + to the master. + + Args: + user_id (str) + """ + self.users_going_offline[user_id] = self.clock.time_msec() + + def send_stop_syncing(self): + """Check if there are any users who have stopped syncing a while ago + and haven't come back yet. If there are, poke the master about them. + """ + now = self.clock.time_msec() + for user_id, last_sync_ms in list(self.users_going_offline.items()): + if now - last_sync_ms > UPDATE_SYNCING_USERS_MS: + self.users_going_offline.pop(user_id, None) + self.send_user_sync(user_id, False, last_sync_ms) + + async def user_syncing( + self, user_id: str, affect_presence: bool + ) -> ContextManager[None]: + """Record that a user is syncing. + + Called by the sync and events servlets to record that a user has connected to + this worker and is waiting for some events. + """ + if not affect_presence or not self._presence_enabled: + return _NullContextManager() + + curr_sync = self._user_to_num_current_syncs.get(user_id, 0) + self._user_to_num_current_syncs[user_id] = curr_sync + 1 + + # If we went from no in flight sync to some, notify replication + if self._user_to_num_current_syncs[user_id] == 1: + self.mark_as_coming_online(user_id) + + def _end(): + # We check that the user_id is in user_to_num_current_syncs because + # user_to_num_current_syncs may have been cleared if we are + # shutting down.
+ if user_id in self._user_to_num_current_syncs: + self._user_to_num_current_syncs[user_id] -= 1 + + # If we went from one in flight sync to none, notify replication + if self._user_to_num_current_syncs[user_id] == 0: + self.mark_as_going_offline(user_id) + + @contextlib.contextmanager + def _user_syncing(): + try: + yield + finally: + _end() + + return _user_syncing() + + async def notify_from_replication(self, states, stream_id): + parties = await get_interested_parties(self.store, self.presence_router, states) + room_ids_to_states, users_to_states = parties + + self.notifier.on_new_event( + "presence_key", + stream_id, + rooms=room_ids_to_states.keys(), + users=users_to_states.keys(), + ) + + async def process_replication_rows(self, token, rows): + states = [ + UserPresenceState( + row.user_id, + row.state, + row.last_active_ts, + row.last_federation_update_ts, + row.last_user_sync_ts, + row.status_msg, + row.currently_active, + ) + for row in rows + ] + + for state in states: + self.user_to_current_state[state.user_id] = state + + stream_id = token + await self.notify_from_replication(states, stream_id) + + def get_currently_syncing_users_for_replication(self) -> Iterable[str]: + return [ + user_id + for user_id, count in self._user_to_num_current_syncs.items() + if count > 0 + ] + + async def set_state(self, target_user, state, ignore_status_msg=False): + """Set the presence state of the user.""" + presence = state["presence"] + + valid_presence = ( + PresenceState.ONLINE, + PresenceState.UNAVAILABLE, + PresenceState.OFFLINE, + PresenceState.BUSY, + ) + + if presence not in valid_presence or ( + presence == PresenceState.BUSY and not self._busy_presence_enabled + ): + raise SynapseError(400, "Invalid presence state") + + user_id = target_user.to_string() + + # If presence is disabled, no-op + if not self.hs.config.use_presence: + return + + # Proxy request to master + await self._set_state_client( + user_id=user_id, state=state, ignore_status_msg=ignore_status_msg + ) + + async def bump_presence_active_time(self, user): + """We've seen the user do something that indicates they're interacting + with the app. + """ + # If presence is disabled, no-op + if not self.hs.config.use_presence: + return + + # Proxy request to master + user_id = user.to_string() + await self._bump_active_client(user_id=user_id) + class PresenceHandler(BasePresenceHandler): def __init__(self, hs: "HomeServer"): diff --git a/synapse/replication/tcp/client.py b/synapse/replication/tcp/client.py index ced69ee904..ce5d651cb8 100644 --- a/synapse/replication/tcp/client.py +++ b/synapse/replication/tcp/client.py @@ -14,22 +14,36 @@ """A replication client for use by synapse workers.
""" import logging -from typing import TYPE_CHECKING, Dict, List, Tuple +from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple from twisted.internet.defer import Deferred from twisted.internet.protocol import ReconnectingClientFactory from synapse.api.constants import EventTypes +from synapse.federation import send_queue +from synapse.federation.sender import FederationSender from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable +from synapse.metrics.background_process_metrics import run_as_background_process from synapse.replication.tcp.protocol import ClientReplicationStreamProtocol -from synapse.replication.tcp.streams import TypingStream +from synapse.replication.tcp.streams import ( + AccountDataStream, + DeviceListsStream, + GroupServerStream, + PresenceStream, + PushersStream, + PushRulesStream, + ReceiptsStream, + TagAccountDataStream, + ToDeviceStream, + TypingStream, +) from synapse.replication.tcp.streams.events import ( EventsStream, EventsStreamEventRow, EventsStreamRow, ) -from synapse.types import PersistedEventPosition, UserID -from synapse.util.async_helpers import timeout_deferred +from synapse.types import PersistedEventPosition, ReadReceipt, UserID +from synapse.util.async_helpers import Linearizer, timeout_deferred from synapse.util.metrics import Measure if TYPE_CHECKING: @@ -105,6 +119,14 @@ class ReplicationDataHandler: self._instance_name = hs.get_instance_name() self._typing_handler = hs.get_typing_handler() + self._notify_pushers = hs.config.start_pushers + self._pusher_pool = hs.get_pusherpool() + self._presence_handler = hs.get_presence_handler() + + self.send_handler = None # type: Optional[FederationSenderHandler] + if hs.should_send_federation(): + self.send_handler = FederationSenderHandler(hs) + # Map from stream to list of deferreds waiting for the stream to # arrive at a particular position. The lists are sorted by stream position. 
self._streams_to_waiters = {} # type: Dict[str, List[Tuple[int, Deferred]]] @@ -125,13 +147,53 @@ class ReplicationDataHandler: """ self.store.process_replication_rows(stream_name, instance_name, token, rows) + if self.send_handler: + await self.send_handler.process_replication_rows(stream_name, token, rows) + if stream_name == TypingStream.NAME: self._typing_handler.process_replication_rows(token, rows) self.notifier.on_new_event( "typing_key", token, rooms=[row.room_id for row in rows] ) - - if stream_name == EventsStream.NAME: + elif stream_name == PushRulesStream.NAME: + self.notifier.on_new_event( + "push_rules_key", token, users=[row.user_id for row in rows] + ) + elif stream_name in (AccountDataStream.NAME, TagAccountDataStream.NAME): + self.notifier.on_new_event( + "account_data_key", token, users=[row.user_id for row in rows] + ) + elif stream_name == ReceiptsStream.NAME: + self.notifier.on_new_event( + "receipt_key", token, rooms=[row.room_id for row in rows] + ) + await self._pusher_pool.on_new_receipts( + token, token, {row.room_id for row in rows} + ) + elif stream_name == ToDeviceStream.NAME: + entities = [row.entity for row in rows if row.entity.startswith("@")] + if entities: + self.notifier.on_new_event("to_device_key", token, users=entities) + elif stream_name == DeviceListsStream.NAME: + all_room_ids = set() # type: Set[str] + for row in rows: + if row.entity.startswith("@"): + room_ids = await self.store.get_rooms_for_user(row.entity) + all_room_ids.update(room_ids) + self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids) + elif stream_name == GroupServerStream.NAME: + self.notifier.on_new_event( + "groups_key", token, users=[row.user_id for row in rows] + ) + elif stream_name == PushersStream.NAME: + for row in rows: + if row.deleted: + self.stop_pusher(row.user_id, row.app_id, row.pushkey) + else: + await self.start_pusher(row.user_id, row.app_id, row.pushkey) + elif stream_name == PresenceStream.NAME: + await self._presence_handler.process_replication_rows(token, rows) + elif stream_name == EventsStream.NAME: # We shouldn't get multiple rows per token for events stream, so # we don't need to optimise this for multiple rows. for row in rows: @@ -190,7 +252,7 @@ class ReplicationDataHandler: waiting_list[:] = waiting_list[index_of_first_deferred_not_called:] async def on_position(self, stream_name: str, instance_name: str, token: int): - self.store.process_replication_rows(stream_name, instance_name, token, []) + await self.on_rdata(stream_name, instance_name, token, []) # We poke the generic "replication" notifier to wake anything up that # may be streaming. @@ -199,6 +261,11 @@ class ReplicationDataHandler: def on_remote_server_up(self, server: str): """Called when get a new REMOTE_SERVER_UP command.""" + # Let's wake up the transaction queue for the server in case we have + # pending stuff to send to it. 
+ if self.send_handler: + self.send_handler.wake_destination(server) + async def wait_for_stream_position( self, instance_name: str, stream_name: str, position: int ): @@ -235,3 +302,153 @@ class ReplicationDataHandler: logger.info( "Finished waiting for repl stream %r to reach %s", stream_name, position ) + + def stop_pusher(self, user_id, app_id, pushkey): + if not self._notify_pushers: + return + + key = "%s:%s" % (app_id, pushkey) + pushers_for_user = self._pusher_pool.pushers.get(user_id, {}) + pusher = pushers_for_user.pop(key, None) + if pusher is None: + return + logger.info("Stopping pusher %r / %r", user_id, key) + pusher.on_stop() + + async def start_pusher(self, user_id, app_id, pushkey): + if not self._notify_pushers: + return + + key = "%s:%s" % (app_id, pushkey) + logger.info("Starting pusher %r / %r", user_id, key) + return await self._pusher_pool.start_pusher_by_id(app_id, pushkey, user_id) + + +class FederationSenderHandler: + """Processes the federation replication stream + + This class is only instantiated on the worker responsible for sending outbound + federation transactions. It receives rows from the replication stream and forwards + the appropriate entries to the FederationSender class. + """ + + def __init__(self, hs: "HomeServer"): + assert hs.should_send_federation() + + self.store = hs.get_datastore() + self._is_mine_id = hs.is_mine_id + self._hs = hs + + # We need to make a temporary value to ensure that mypy picks up the + # right type. We know we should have a federation sender instance since + # `should_send_federation` is True. + sender = hs.get_federation_sender() + assert isinstance(sender, FederationSender) + self.federation_sender = sender + + # Stores the latest position in the federation stream we've gotten up + # to. This is always set before we use it. + self.federation_position = None # type: Optional[int] + + self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer") + + def wake_destination(self, server: str): + self.federation_sender.wake_destination(server) + + async def process_replication_rows(self, stream_name, token, rows): + # The federation stream contains things that we want to send out, e.g. + # presence, typing, etc. + if stream_name == "federation": + send_queue.process_rows_for_federation(self.federation_sender, rows) + await self.update_token(token) + + # ... and when new receipts happen + elif stream_name == ReceiptsStream.NAME: + await self._on_new_receipts(rows) + + # ... as well as device updates and messages + elif stream_name == DeviceListsStream.NAME: + # The entities are either user IDs (starting with '@') whose devices + # have changed, or remote servers that we need to tell about + # changes. + hosts = {row.entity for row in rows if not row.entity.startswith("@")} + for host in hosts: + self.federation_sender.send_device_messages(host) + + elif stream_name == ToDeviceStream.NAME: + # The to_device stream includes stuff to be pushed to both local + # clients and remote servers, so we ignore entities that start with + # '@' (since they'll be local users rather than destinations).
+ hosts = {row.entity for row in rows if not row.entity.startswith("@")} + for host in hosts: + self.federation_sender.send_device_messages(host) + + async def _on_new_receipts(self, rows): + """ + Args: + rows (Iterable[synapse.replication.tcp.streams.ReceiptsStream.ReceiptsStreamRow]): + new receipts to be processed + """ + for receipt in rows: + # we only want to send on receipts for our own users + if not self._is_mine_id(receipt.user_id): + continue + receipt_info = ReadReceipt( + receipt.room_id, + receipt.receipt_type, + receipt.user_id, + [receipt.event_id], + receipt.data, + ) + await self.federation_sender.send_read_receipt(receipt_info) + + async def update_token(self, token): + """Update the record of where we have processed to in the federation stream. + + Called after we have processed an update received over replication. Sends + a FEDERATION_ACK back to the master, and stores the token that we have processed + in `federation_stream_position` so that we can restart where we left off. + """ + self.federation_position = token + + # We save and send the ACK to master asynchronously, so we don't block + # processing on persistence. We don't need to do this operation for + # every single RDATA we receive, we just need to do it periodically. + + if self._fed_position_linearizer.is_queued(None): + # There is already a task queued up to save and send the token, so + # no need to queue up another task. + return + + run_as_background_process("_save_and_send_ack", self._save_and_send_ack) + + async def _save_and_send_ack(self): + """Save the current federation position in the database and send an ACK + to master with where we're up to. + """ + # We should only be calling this once we've got a token. + assert self.federation_position is not None + + try: + # We linearize here to ensure we don't have races updating the token + # + # XXX this appears to be redundant, since the ReplicationCommandHandler + # has a linearizer which ensures that we only process one line of + # replication data at a time. Should we remove it, or is it doing useful + # service for robustness? Or could we replace it with an assertion that + # we're not being re-entered? + + with (await self._fed_position_linearizer.queue(None)): + # We persist and ack the same position, so we take a copy of it + # here as otherwise it can get modified from underneath us.
+ current_position = self.federation_position + + await self.store.update_federation_out_pos( + "federation", current_position + ) + + # We ACK this token over replication so that the master can drop + # its in memory queues + self._hs.get_tcp_replication().send_federation_ack(current_position) + except Exception: + logger.exception("Error updating federation stream position") diff --git a/synapse/server.py b/synapse/server.py index 6c35ae6e50..95a2cd2e5d 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -85,7 +85,11 @@ from synapse.handlers.initial_sync import InitialSyncHandler from synapse.handlers.message import EventCreationHandler, MessageHandler from synapse.handlers.pagination import PaginationHandler from synapse.handlers.password_policy import PasswordPolicyHandler -from synapse.handlers.presence import PresenceHandler +from synapse.handlers.presence import ( + BasePresenceHandler, + PresenceHandler, + WorkerPresenceHandler, +) from synapse.handlers.profile import ProfileHandler from synapse.handlers.read_marker import ReadMarkerHandler from synapse.handlers.receipts import ReceiptsHandler @@ -415,8 +419,11 @@ class HomeServer(metaclass=abc.ABCMeta): return StateResolutionHandler(self) @cache_in_self - def get_presence_handler(self) -> PresenceHandler: - return PresenceHandler(self) + def get_presence_handler(self) -> BasePresenceHandler: + if self.config.worker_app: + return WorkerPresenceHandler(self) + else: + return PresenceHandler(self) @cache_in_self def get_typing_writer_handler(self) -> TypingWriterHandler: diff --git a/tests/replication/_base.py b/tests/replication/_base.py index 36138d69aa..c9d04aef29 100644 --- a/tests/replication/_base.py +++ b/tests/replication/_base.py @@ -21,13 +21,11 @@ from twisted.web.http import HTTPChannel from twisted.web.resource import Resource from twisted.web.server import Request, Site -from synapse.app.generic_worker import ( - GenericWorkerReplicationHandler, - GenericWorkerServer, -) +from synapse.app.generic_worker import GenericWorkerServer from synapse.http.server import JsonResource from synapse.http.site import SynapseRequest, SynapseSite from synapse.replication.http import ReplicationRestResource +from synapse.replication.tcp.client import ReplicationDataHandler from synapse.replication.tcp.handler import ReplicationCommandHandler from synapse.replication.tcp.protocol import ClientReplicationStreamProtocol from synapse.replication.tcp.resource import ( @@ -431,7 +429,7 @@ class BaseMultiWorkerStreamTestCase(unittest.HomeserverTestCase): server_protocol.makeConnection(server_to_client_transport) -class TestReplicationDataHandler(GenericWorkerReplicationHandler): +class TestReplicationDataHandler(ReplicationDataHandler): """Drop-in for ReplicationDataHandler which just collects RDATA rows""" def __init__(self, hs: HomeServer): -- cgit 1.5.1 From 05e8c70c059f8ebb066e029bc3aa3e0cefef1019 Mon Sep 17 00:00:00 2001 From: Jonathan de Jong Date: Wed, 14 Apr 2021 18:19:02 +0200 Subject: Experimental Federation Speedup (#9702) This basically speeds up federation by "squeezing" each individual dual database call (to destinations and destination_rooms), which previously happened once per event, into one call for an entire batch (100 max).
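Concretely, instead of upserting `destinations` and `destination_rooms` once per event, the patch folds a whole batch of events into a single mapping of (room_id, destination) -> highest stream_ordering, and writes that mapping in one `bulk_store_destination_rooms_entries` call. A simplified, self-contained sketch of the folding step (plain tuples stand in for `EventBase` objects; names are illustrative, not part of the patch):

from typing import Dict, Iterable, List, Tuple

def fold_destination_orderings(
    events_and_dests: Iterable[Tuple[str, int, List[str]]],
) -> Dict[Tuple[str, str], int]:
    # Each input tuple is (room_id, stream_ordering, destinations); keep
    # only the highest stream_ordering per (room_id, destination) pair,
    # which is all a single bulk upsert needs.
    orderings = {}  # type: Dict[Tuple[str, str], int]
    for room_id, stream_ordering, destinations in events_and_dests:
        for destination in destinations:
            key = (room_id, destination)
            orderings[key] = max(stream_ordering, orderings.get(key, 0))
    return orderings

# Three events across two destinations collapse into three rows written
# in one batched call, rather than three per-event calls.
batch = [
    ("!a:example.org", 10, ["one.example", "two.example"]),
    ("!a:example.org", 11, ["one.example"]),
    ("!b:example.org", 12, ["two.example"]),
]
assert fold_destination_orderings(batch) == {
    ("!a:example.org", "one.example"): 11,
    ("!a:example.org", "two.example"): 10,
    ("!b:example.org", "two.example"): 12,
}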
Signed-off-by: Jonathan de Jong --- changelog.d/9702.misc | 1 + contrib/experiments/test_messaging.py | 42 ++++--- synapse/federation/sender/__init__.py | 140 ++++++++++++--------- synapse/federation/sender/per_destination_queue.py | 15 ++- synapse/storage/databases/main/transactions.py | 28 ++--- 5 files changed, 129 insertions(+), 97 deletions(-) create mode 100644 changelog.d/9702.misc diff --git a/changelog.d/9702.misc b/changelog.d/9702.misc new file mode 100644 index 0000000000..c6e63450a9 --- /dev/null +++ b/changelog.d/9702.misc @@ -0,0 +1 @@ +Speed up federation transmission by using fewer database calls. Contributed by @ShadowJonathan. diff --git a/contrib/experiments/test_messaging.py b/contrib/experiments/test_messaging.py index 31b8a68225..5dd172052b 100644 --- a/contrib/experiments/test_messaging.py +++ b/contrib/experiments/test_messaging.py @@ -224,14 +224,16 @@ class HomeServer(ReplicationHandler): destinations = yield self.get_servers_for_context(room_name) try: - yield self.replication_layer.send_pdu( - Pdu.create_new( - context=room_name, - pdu_type="sy.room.message", - content={"sender": sender, "body": body}, - origin=self.server_name, - destinations=destinations, - ) + yield self.replication_layer.send_pdus( + [ + Pdu.create_new( + context=room_name, + pdu_type="sy.room.message", + content={"sender": sender, "body": body}, + origin=self.server_name, + destinations=destinations, + ) + ] ) except Exception as e: logger.exception(e) @@ -253,7 +255,7 @@ class HomeServer(ReplicationHandler): origin=self.server_name, destinations=destinations, ) - yield self.replication_layer.send_pdu(pdu) + yield self.replication_layer.send_pdus([pdu]) except Exception as e: logger.exception(e) @@ -265,16 +267,18 @@ class HomeServer(ReplicationHandler): destinations = yield self.get_servers_for_context(room_name) try: - yield self.replication_layer.send_pdu( - Pdu.create_new( - context=room_name, - is_state=True, - pdu_type="sy.room.member", - state_key=invitee, - content={"membership": "invite"}, - origin=self.server_name, - destinations=destinations, - ) + yield self.replication_layer.send_pdus( + [ + Pdu.create_new( + context=room_name, + is_state=True, + pdu_type="sy.room.member", + state_key=invitee, + content={"membership": "invite"}, + origin=self.server_name, + destinations=destinations, + ) + ] ) except Exception as e: logger.exception(e) diff --git a/synapse/federation/sender/__init__.py b/synapse/federation/sender/__init__.py index 155161685d..952ad39f8c 100644 --- a/synapse/federation/sender/__init__.py +++ b/synapse/federation/sender/__init__.py @@ -18,8 +18,6 @@ from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, from prometheus_client import Counter -from twisted.internet import defer - import synapse.metrics from synapse.api.presence import UserPresenceState from synapse.events import EventBase @@ -27,11 +25,7 @@ from synapse.federation.sender.per_destination_queue import PerDestinationQueue from synapse.federation.sender.transaction_manager import TransactionManager from synapse.federation.units import Edu from synapse.handlers.presence import get_interested_remotes -from synapse.logging.context import ( - make_deferred_yieldable, - preserve_fn, - run_in_background, -) +from synapse.logging.context import preserve_fn from synapse.metrics import ( LaterGauge, event_processing_loop_counter, @@ -39,7 +33,7 @@ from synapse.metrics import ( events_processed_counter, ) from synapse.metrics.background_process_metrics import run_as_background_process 
-from synapse.types import JsonDict, ReadReceipt, RoomStreamToken +from synapse.types import Collection, JsonDict, ReadReceipt, RoomStreamToken from synapse.util.metrics import Measure, measure_func if TYPE_CHECKING: @@ -276,15 +270,27 @@ class FederationSender(AbstractFederationSender): if not events and next_token >= self._last_poked_id: break - async def handle_event(event: EventBase) -> None: + async def get_destinations_for_event( + event: EventBase, + ) -> Collection[str]: + """Computes the destinations to which this event must be sent. + + This returns an empty tuple when there are no destinations to send to, + or if this event is not from this homeserver and this server is not sending + it on behalf of another server. + + Will also filter out destinations which this sender is not responsible for, + if multiple federation senders exist. + """ + # Only send events for this server. send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of() is_mine = self.is_mine_id(event.sender) if not is_mine and send_on_behalf_of is None: - return + return () if not event.internal_metadata.should_proactively_send(): - return + return () destinations = None # type: Optional[Set[str]] if not event.prev_event_ids(): @@ -319,7 +325,7 @@ class FederationSender(AbstractFederationSender): "Failed to calculate hosts in room for event: %s", event.event_id, ) - return + return () destinations = { d @@ -329,17 +335,15 @@ class FederationSender(AbstractFederationSender): ) } + destinations.discard(self.server_name) + if send_on_behalf_of is not None: # If we are sending the event on behalf of another server # then it already has the event and there is no reason to # send the event to it. destinations.discard(send_on_behalf_of) - logger.debug("Sending %s to %r", event, destinations) - if destinations: - await self._send_pdu(event, destinations) - now = self.clock.time_msec() ts = await self.store.get_received_ts(event.event_id) @@ -347,24 +351,29 @@ class FederationSender(AbstractFederationSender): "federation_sender" ).observe((now - ts) / 1000) - async def handle_room_events(events: Iterable[EventBase]) -> None: - with Measure(self.clock, "handle_room_events"): - for event in events: - await handle_event(event) - - events_by_room = {} # type: Dict[str, List[EventBase]] - for event in events: - events_by_room.setdefault(event.room_id, []).append(event) - - await make_deferred_yieldable( - defer.gatherResults( - [ - run_in_background(handle_room_events, evs) - for evs in events_by_room.values() - ], - consumeErrors=True, - ) - ) + return destinations + return () + + async def get_federatable_events_and_destinations( + events: Iterable[EventBase], + ) -> List[Tuple[EventBase, Collection[str]]]: + with Measure(self.clock, "get_destinations_for_events"): + # Fetch federation destinations per event, + # skip if get_destinations_for_event returns an empty collection, + # return list of event->destinations pairs.
+ return [ + (event, dests) + for (event, dests) in [ + (event, await get_destinations_for_event(event)) + for event in events + ] + if dests + ] + + events_and_dests = await get_federatable_events_and_destinations(events) + + # Send corresponding events to each destination queue + await self._distribute_events(events_and_dests) await self.store.update_federation_out_pos("events", next_token) @@ -382,7 +391,7 @@ class FederationSender(AbstractFederationSender): events_processed_counter.inc(len(events)) event_processing_loop_room_count.labels("federation_sender").inc( - len(events_by_room) + len({event.room_id for event in events}) ) event_processing_loop_counter.labels("federation_sender").inc() @@ -394,34 +403,53 @@ class FederationSender(AbstractFederationSender): finally: self._is_processing = False - async def _send_pdu(self, pdu: EventBase, destinations: Iterable[str]) -> None: - # We loop through all destinations to see whether we already have - # a transaction in progress. If we do, stick it in the pending_pdus - # table and we'll get back to it later. + async def _distribute_events( + self, + events_and_dests: Iterable[Tuple[EventBase, Collection[str]]], + ) -> None: + """Distribute events to the respective per_destination queues. - destinations = set(destinations) - destinations.discard(self.server_name) - logger.debug("Sending to: %s", str(destinations)) + Also persists last-seen per-room stream_ordering to 'destination_rooms'. - if not destinations: - return + Args: + events_and_dests: A list of tuples, which are (event: EventBase, destinations: Collection[str]). + Every event is paired with its intended destinations (in federation). + """ + # Tuples of room_id + destination to their max-seen stream_ordering + room_with_dest_stream_ordering = {} # type: Dict[Tuple[str, str], int] - sent_pdus_destination_dist_total.inc(len(destinations)) - sent_pdus_destination_dist_count.inc() + # List of events to send to each destination + events_by_dest = {} # type: Dict[str, List[EventBase]] - assert pdu.internal_metadata.stream_ordering + # For each event-destinations pair... + for event, destinations in events_and_dests: - # track the fact that we have a PDU for these destinations, - # to allow us to perform catch-up later on if the remote is unreachable - # for a while. - await self.store.store_destination_rooms_entries( - destinations, - pdu.room_id, - pdu.internal_metadata.stream_ordering, + # (we got this from the database, it's filled) + assert event.internal_metadata.stream_ordering + + sent_pdus_destination_dist_total.inc(len(destinations)) + sent_pdus_destination_dist_count.inc() + + # ...iterate over those destinations.. + for destination in destinations: + # ...update their stream-ordering... + room_with_dest_stream_ordering[(event.room_id, destination)] = max( + event.internal_metadata.stream_ordering, + room_with_dest_stream_ordering.get((event.room_id, destination), 0), + ) + + # ...and add the event to each destination queue. 
+ events_by_dest.setdefault(destination, []).append(event) + + # Bulk-store destination_rooms stream_ids + await self.store.bulk_store_destination_rooms_entries( + room_with_dest_stream_ordering ) - for destination in destinations: - self._get_per_destination_queue(destination).send_pdu(pdu) + for destination, pdus in events_by_dest.items(): + logger.debug("Sending %d pdus to %s", len(pdus), destination) + + self._get_per_destination_queue(destination).send_pdus(pdus) async def send_read_receipt(self, receipt: ReadReceipt) -> None: """Send a RR to any other servers in the room diff --git a/synapse/federation/sender/per_destination_queue.py b/synapse/federation/sender/per_destination_queue.py index 3b053ebcfb..3bb66bce32 100644 --- a/synapse/federation/sender/per_destination_queue.py +++ b/synapse/federation/sender/per_destination_queue.py @@ -154,19 +154,22 @@ class PerDestinationQueue: + len(self._pending_edus_keyed) ) - def send_pdu(self, pdu: EventBase) -> None: - """Add a PDU to the queue, and start the transmission loop if necessary + def send_pdus(self, pdus: Iterable[EventBase]) -> None: + """Add PDUs to the queue, and start the transmission loop if necessary Args: - pdu: pdu to send + pdus: pdus to send """ if not self._catching_up or self._last_successful_stream_ordering is None: # only enqueue the PDU if we are not catching up (False) or do not # yet know if we have anything to catch up (None) - self._pending_pdus.append(pdu) + self._pending_pdus.extend(pdus) else: - assert pdu.internal_metadata.stream_ordering - self._catchup_last_skipped = pdu.internal_metadata.stream_ordering + self._catchup_last_skipped = max( + pdu.internal_metadata.stream_ordering + for pdu in pdus + if pdu.internal_metadata.stream_ordering is not None + ) self.attempt_new_transaction() diff --git a/synapse/storage/databases/main/transactions.py b/synapse/storage/databases/main/transactions.py index 82335e7a9d..b28ca61f80 100644 --- a/synapse/storage/databases/main/transactions.py +++ b/synapse/storage/databases/main/transactions.py @@ -14,7 +14,7 @@ import logging from collections import namedtuple -from typing import Iterable, List, Optional, Tuple +from typing import Dict, List, Optional, Tuple from canonicaljson import encode_canonical_json @@ -295,37 +295,33 @@ class TransactionStore(TransactionWorkerStore): }, ) - async def store_destination_rooms_entries( - self, - destinations: Iterable[str], - room_id: str, - stream_ordering: int, - ) -> None: + async def bulk_store_destination_rooms_entries( + self, room_and_destination_to_ordering: Dict[Tuple[str, str], int] + ): """ - Updates or creates `destination_rooms` entries in batch for a single event. + Updates or creates `destination_rooms` entries for a number of events. 
Args: - destinations: list of destinations - room_id: the room_id of the event - stream_ordering: the stream_ordering of the event + room_and_destination_to_ordering: A mapping of (room, destination) -> stream_id """ await self.db_pool.simple_upsert_many( table="destinations", key_names=("destination",), - key_values=[(d,) for d in destinations], + key_values={(d,) for _, d in room_and_destination_to_ordering.keys()}, value_names=[], value_values=[], desc="store_destination_rooms_entries_dests", ) - rows = [(destination, room_id) for destination in destinations] await self.db_pool.simple_upsert_many( table="destination_rooms", - key_names=("destination", "room_id"), - key_values=rows, + key_names=("room_id", "destination"), + key_values=list(room_and_destination_to_ordering.keys()), value_names=["stream_ordering"], - value_values=[(stream_ordering,)] * len(rows), + value_values=[ + (stream_id,) for stream_id in room_and_destination_to_ordering.values() + ], desc="store_destination_rooms_entries_rooms", ) -- cgit 1.5.1 From cc51aaaa7adb0ec2235e027b5184ebda9b660ec4 Mon Sep 17 00:00:00 2001 From: Patrick Cloke Date: Wed, 14 Apr 2021 12:32:20 -0400 Subject: Check for space membership during a remote join of a restricted room. (#9763) When receiving a /send_join request for a room with join rules set to 'restricted', check if the user is a member of the spaces defined in the 'allow' key of the join rules. This only applies to an experimental room version, as defined in MSC3083. --- changelog.d/9763.feature | 1 + changelog.d/9800.feature | 1 + synapse/handlers/event_auth.py | 82 ++++++++++++++++ synapse/handlers/federation.py | 212 +++++++++++++++++++++++++++------------- synapse/handlers/room_member.py | 62 +----------- synapse/server.py | 5 + tests/test_federation.py | 6 +- 7 files changed, 238 insertions(+), 131 deletions(-) create mode 100644 changelog.d/9763.feature create mode 100644 changelog.d/9800.feature create mode 100644 synapse/handlers/event_auth.py diff --git a/changelog.d/9763.feature b/changelog.d/9763.feature new file mode 100644 index 0000000000..9404ad2fc0 --- /dev/null +++ b/changelog.d/9763.feature @@ -0,0 +1 @@ +Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. diff --git a/changelog.d/9800.feature b/changelog.d/9800.feature new file mode 100644 index 0000000000..9404ad2fc0 --- /dev/null +++ b/changelog.d/9800.feature @@ -0,0 +1 @@ +Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. diff --git a/synapse/handlers/event_auth.py b/synapse/handlers/event_auth.py new file mode 100644 index 0000000000..06da1a93d9 --- /dev/null +++ b/synapse/handlers/event_auth.py @@ -0,0 +1,82 @@ +# Copyright 2021 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
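Stepping back to the storage hunk above: a single (room, destination) -> stream_ordering mapping now drives both bulk upserts. A sketch of how the rows for each call are derived from the mapping (the mapping contents here are invented; simple_upsert_many is Synapse's existing helper):

    # Hypothetical input: (room_id, destination) -> stream_ordering
    mapping = {
        ("!a:example.org", "one.example.com"): 7,
        ("!a:example.org", "two.example.com"): 9,
        ("!b:example.org", "one.example.com"): 3,
    }

    # Rows for the `destinations` table: one per distinct destination.
    destination_rows = {(d,) for _, d in mapping.keys()}
    # -> {("one.example.com",), ("two.example.com",)}

    # Rows for `destination_rooms`: the keys, plus stream_ordering values.
    key_rows = list(mapping.keys())
    value_rows = [(stream_ordering,) for stream_ordering in mapping.values()]
    # keys() and values() iterate in the same order, so the rows line up.
    assert len(key_rows) == len(value_rows)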
+from typing import TYPE_CHECKING + +from synapse.api.constants import EventTypes, JoinRules +from synapse.api.room_versions import RoomVersion +from synapse.types import StateMap + +if TYPE_CHECKING: + from synapse.server import HomeServer + + +class EventAuthHandler: + def __init__(self, hs: "HomeServer"): + self._store = hs.get_datastore() + + async def can_join_without_invite( + self, state_ids: StateMap[str], room_version: RoomVersion, user_id: str + ) -> bool: + """ + Check whether a user can join a room without an invite. + + When joining a room with restricted joined rules (as defined in MSC3083), + the membership of spaces must be checked during join. + + Args: + state_ids: The state of the room as it currently is. + room_version: The room version of the room being joined. + user_id: The user joining the room. + + Returns: + True if the user can join the room, false otherwise. + """ + # This only applies to room versions which support the new join rule. + if not room_version.msc3083_join_rules: + return True + + # If there's no join rule, then it defaults to public (so this doesn't apply). + join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""), None) + if not join_rules_event_id: + return True + + # If the join rule is not restricted, this doesn't apply. + join_rules_event = await self._store.get_event(join_rules_event_id) + if join_rules_event.content.get("join_rule") != JoinRules.MSC3083_RESTRICTED: + return True + + # If allowed is of the wrong form, then only allow invited users. + allowed_spaces = join_rules_event.content.get("allow", []) + if not isinstance(allowed_spaces, list): + return False + + # Get the list of joined rooms and see if there's an overlap. + joined_rooms = await self._store.get_rooms_for_user(user_id) + + # Pull out the other room IDs, invalid data gets filtered. + for space in allowed_spaces: + if not isinstance(space, dict): + continue + + space_id = space.get("space") + if not isinstance(space_id, str): + continue + + # The user was joined to one of the spaces specified, they can join + # this room! + if space_id in joined_rooms: + return True + + # The user was not in any of the required spaces. 
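For concreteness, the "allow" structure that the loop above filters looks roughly like the following (field names as implemented here per MSC3083 at the time of this change; the room IDs are invented):

    # Content of the m.room.join_rules state event for a restricted room.
    join_rules_content = {
        "join_rule": "restricted",  # JoinRules.MSC3083_RESTRICTED
        "allow": [
            # Each well-formed entry names a space whose members may join.
            {"space": "!space:example.org"},
            # Malformed entries (non-dict, or a non-string "space") are
            # skipped by the loop rather than failing the whole check.
            "not-a-dict",
            {"space": 123},
        ],
    }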
+ return False diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py index fe1d83f6b8..0c9bdf51a4 100644 --- a/synapse/handlers/federation.py +++ b/synapse/handlers/federation.py @@ -103,7 +103,7 @@ logger = logging.getLogger(__name__) @attr.s(slots=True) class _NewEventInfo: - """Holds information about a received event, ready for passing to _handle_new_events + """Holds information about a received event, ready for passing to _auth_and_persist_events Attributes: event: the received event @@ -146,6 +146,7 @@ class FederationHandler(BaseHandler): self.is_mine_id = hs.is_mine_id self.spam_checker = hs.get_spam_checker() self.event_creation_handler = hs.get_event_creation_handler() + self.event_auth_handler = hs.get_event_auth_handler() self._message_handler = hs.get_message_handler() self._server_notices_mxid = hs.config.server_notices_mxid self.config = hs.config @@ -807,7 +808,10 @@ class FederationHandler(BaseHandler): logger.debug("Processing event: %s", event) try: - await self._handle_new_event(origin, event, state=state) + context = await self.state_handler.compute_event_context( + event, old_state=state + ) + await self._auth_and_persist_event(origin, event, context, state=state) except AuthError as e: raise FederationError("ERROR", e.code, e.msg, affected=event.event_id) @@ -1010,7 +1014,9 @@ class FederationHandler(BaseHandler): ) if ev_infos: - await self._handle_new_events(dest, room_id, ev_infos, backfilled=True) + await self._auth_and_persist_events( + dest, room_id, ev_infos, backfilled=True + ) # Step 2: Persist the rest of the events in the chunk one by one events.sort(key=lambda e: e.depth) @@ -1023,10 +1029,12 @@ class FederationHandler(BaseHandler): # non-outliers assert not event.internal_metadata.is_outlier() + context = await self.state_handler.compute_event_context(event) + # We store these one at a time since each event depends on the # previous to work out the state. # TODO: We can probably do something more clever here. - await self._handle_new_event(dest, event, backfilled=True) + await self._auth_and_persist_event(dest, event, context, backfilled=True) return events @@ -1360,7 +1368,7 @@ class FederationHandler(BaseHandler): event_infos.append(_NewEventInfo(event, None, auth)) - await self._handle_new_events( + await self._auth_and_persist_events( destination, room_id, event_infos, @@ -1666,16 +1674,47 @@ class FederationHandler(BaseHandler): # would introduce the danger of backwards-compatibility problems. event.internal_metadata.send_on_behalf_of = origin - context = await self._handle_new_event(origin, event) + # Calculate the event context. + context = await self.state_handler.compute_event_context(event) + + # Get the state before the new event. + prev_state_ids = await context.get_prev_state_ids() + + # Check if the user is already in the room or invited to the room. + user_id = event.state_key + prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None) + newly_joined = True + is_invite = False + if prev_member_event_id: + prev_member_event = await self.store.get_event(prev_member_event_id) + newly_joined = prev_member_event.membership != Membership.JOIN + is_invite = prev_member_event.membership == Membership.INVITE + + # If the member is not already in the room, and not invited, check if + # they should be allowed access via membership in a space. 
+ if ( + newly_joined + and not is_invite + and not await self.event_auth_handler.can_join_without_invite( + prev_state_ids, + event.room_version, + user_id, + ) + ): + raise SynapseError( + 400, + "You do not belong to any of the required spaces to join this room.", + ) + + # Persist the event. + await self._auth_and_persist_event(origin, event, context) logger.debug( - "on_send_join_request: After _handle_new_event: %s, sigs: %s", + "on_send_join_request: After _auth_and_persist_event: %s, sigs: %s", event.event_id, event.signatures, ) - prev_state_ids = await context.get_prev_state_ids() - state_ids = list(prev_state_ids.values()) auth_chain = await self.store.get_auth_chain(event.room_id, state_ids) @@ -1878,10 +1917,11 @@ class FederationHandler(BaseHandler): event.internal_metadata.outlier = False - await self._handle_new_event(origin, event) + context = await self.state_handler.compute_event_context(event) + await self._auth_and_persist_event(origin, event, context) logger.debug( - "on_send_leave_request: After _handle_new_event: %s, sigs: %s", + "on_send_leave_request: After _auth_and_persist_event: %s, sigs: %s", event.event_id, event.signatures, ) @@ -1989,16 +2029,44 @@ class FederationHandler(BaseHandler): async def get_min_depth_for_context(self, context: str) -> int: return await self.store.get_min_depth(context) - async def _handle_new_event( + async def _auth_and_persist_event( self, origin: str, event: EventBase, + context: EventContext, state: Optional[Iterable[EventBase]] = None, auth_events: Optional[MutableStateMap[EventBase]] = None, backfilled: bool = False, - ) -> EventContext: - context = await self._prep_event( - origin, event, state=state, auth_events=auth_events, backfilled=backfilled + ) -> None: + """ + Process an event by performing auth checks and then persisting to the database. + + Args: + origin: The host the event originates from. + event: The event itself. + context: + The event context. + + NB that this function potentially modifies it. + state: + The state events used to auth the event. If this is not provided + the current state events will be used. + auth_events: + Map from (event_type, state_key) to event + + Normally, our calculated auth_events based on the state of the room + at the event's position in the DAG, though occasionally (eg if the + event is an outlier), may be the auth events claimed by the remote + server. + backfilled: True if the event was backfilled. 
+ """ + context = await self._check_event_auth( + origin, + event, + context, + state=state, + auth_events=auth_events, + backfilled=backfilled, ) try: @@ -2020,9 +2088,7 @@ class FederationHandler(BaseHandler): ) raise - return context - - async def _handle_new_events( + async def _auth_and_persist_events( self, origin: str, room_id: str, @@ -2040,9 +2106,13 @@ class FederationHandler(BaseHandler): async def prep(ev_info: _NewEventInfo): event = ev_info.event with nested_logging_context(suffix=event.event_id): - res = await self._prep_event( + res = await self.state_handler.compute_event_context( + event, old_state=ev_info.state + ) + res = await self._check_event_auth( origin, event, + res, state=ev_info.state, auth_events=ev_info.auth_events, backfilled=backfilled, @@ -2177,49 +2247,6 @@ class FederationHandler(BaseHandler): room_id, [(event, new_event_context)] ) - async def _prep_event( - self, - origin: str, - event: EventBase, - state: Optional[Iterable[EventBase]], - auth_events: Optional[MutableStateMap[EventBase]], - backfilled: bool, - ) -> EventContext: - context = await self.state_handler.compute_event_context(event, old_state=state) - - if not auth_events: - prev_state_ids = await context.get_prev_state_ids() - auth_events_ids = self.auth.compute_auth_events( - event, prev_state_ids, for_verification=True - ) - auth_events_x = await self.store.get_events(auth_events_ids) - auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()} - - # This is a hack to fix some old rooms where the initial join event - # didn't reference the create event in its auth events. - if event.type == EventTypes.Member and not event.auth_event_ids(): - if len(event.prev_event_ids()) == 1 and event.depth < 5: - c = await self.store.get_event( - event.prev_event_ids()[0], allow_none=True - ) - if c and c.type == EventTypes.Create: - auth_events[(c.type, c.state_key)] = c - - context = await self.do_auth(origin, event, context, auth_events=auth_events) - - if not context.rejected: - await self._check_for_soft_fail(event, state, backfilled) - - if event.type == EventTypes.GuestAccess and not context.rejected: - await self.maybe_kick_guest_users(event) - - # If we are going to send this event over federation we precaclculate - # the joined hosts. - if event.internal_metadata.get_send_on_behalf_of(): - await self.event_creation_handler.cache_joined_hosts_for_event(event) - - return context - async def _check_for_soft_fail( self, event: EventBase, state: Optional[Iterable[EventBase]], backfilled: bool ) -> None: @@ -2330,19 +2357,28 @@ class FederationHandler(BaseHandler): return missing_events - async def do_auth( + async def _check_event_auth( self, origin: str, event: EventBase, context: EventContext, - auth_events: MutableStateMap[EventBase], + state: Optional[Iterable[EventBase]], + auth_events: Optional[MutableStateMap[EventBase]], + backfilled: bool, ) -> EventContext: """ + Checks whether an event should be rejected (for failing auth checks). Args: - origin: - event: + origin: The host the event originates from. + event: The event itself. context: + The event context. + + NB that this function potentially modifies it. + state: + The state events used to auth the event. If this is not provided + the current state events will be used. auth_events: Map from (event_type, state_key) to event @@ -2352,12 +2388,32 @@ class FederationHandler(BaseHandler): server. Also NB that this function adds entries to it. + backfilled: True if the event was backfilled. 
+ Returns: - updated context object + The updated context object. """ room_version = await self.store.get_room_version_id(event.room_id) room_version_obj = KNOWN_ROOM_VERSIONS[room_version] + if not auth_events: + prev_state_ids = await context.get_prev_state_ids() + auth_events_ids = self.auth.compute_auth_events( + event, prev_state_ids, for_verification=True + ) + auth_events_x = await self.store.get_events(auth_events_ids) + auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()} + + # This is a hack to fix some old rooms where the initial join event + # didn't reference the create event in its auth events. + if event.type == EventTypes.Member and not event.auth_event_ids(): + if len(event.prev_event_ids()) == 1 and event.depth < 5: + c = await self.store.get_event( + event.prev_event_ids()[0], allow_none=True + ) + if c and c.type == EventTypes.Create: + auth_events[(c.type, c.state_key)] = c + try: context = await self._update_auth_events_and_context_for_auth( origin, event, context, auth_events @@ -2379,6 +2435,17 @@ class FederationHandler(BaseHandler): logger.warning("Failed auth resolution for %r because %s", event, e) context.rejected = RejectedReason.AUTH_ERROR + if not context.rejected: + await self._check_for_soft_fail(event, state, backfilled) + + if event.type == EventTypes.GuestAccess and not context.rejected: + await self.maybe_kick_guest_users(event) + + # If we are going to send this event over federation we precaclculate + # the joined hosts. + if event.internal_metadata.get_send_on_behalf_of(): + await self.event_creation_handler.cache_joined_hosts_for_event(event) + return context async def _update_auth_events_and_context_for_auth( @@ -2388,7 +2455,7 @@ class FederationHandler(BaseHandler): context: EventContext, auth_events: MutableStateMap[EventBase], ) -> EventContext: - """Helper for do_auth. See there for docs. + """Helper for _check_event_auth. See there for docs. Checks whether a given event has the expected auth events. 
If it doesn't then we talk to the remote server to compare state to see if @@ -2468,9 +2535,14 @@ class FederationHandler(BaseHandler): e.internal_metadata.outlier = True logger.debug( - "do_auth %s missing_auth: %s", event.event_id, e.event_id + "_check_event_auth %s missing_auth: %s", + event.event_id, + e.event_id, + ) + context = await self.state_handler.compute_event_context(e) + await self._auth_and_persist_event( + origin, e, context, auth_events=auth ) - await self._handle_new_event(origin, e, auth_events=auth) if e.event_id in event_auth_events: auth_events[(e.type, e.state_key)] = e diff --git a/synapse/handlers/room_member.py b/synapse/handlers/room_member.py index 2bbfac6471..2c5bada1d8 100644 --- a/synapse/handlers/room_member.py +++ b/synapse/handlers/room_member.py @@ -19,7 +19,7 @@ from http import HTTPStatus from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple from synapse import types -from synapse.api.constants import AccountDataTypes, EventTypes, JoinRules, Membership +from synapse.api.constants import AccountDataTypes, EventTypes, Membership from synapse.api.errors import ( AuthError, Codes, @@ -28,7 +28,6 @@ from synapse.api.errors import ( SynapseError, ) from synapse.api.ratelimiting import Ratelimiter -from synapse.api.room_versions import RoomVersion from synapse.events import EventBase from synapse.events.snapshot import EventContext from synapse.types import JsonDict, Requester, RoomAlias, RoomID, StateMap, UserID @@ -64,6 +63,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): self.profile_handler = hs.get_profile_handler() self.event_creation_handler = hs.get_event_creation_handler() self.account_data_handler = hs.get_account_data_handler() + self.event_auth_handler = hs.get_event_auth_handler() self.member_linearizer = Linearizer(name="member") @@ -178,62 +178,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): await self._invites_per_user_limiter.ratelimit(requester, invitee_user_id) - async def _can_join_without_invite( - self, state_ids: StateMap[str], room_version: RoomVersion, user_id: str - ) -> bool: - """ - Check whether a user can join a room without an invite. - - When joining a room with restricted joined rules (as defined in MSC3083), - the membership of spaces must be checked during join. - - Args: - state_ids: The state of the room as it currently is. - room_version: The room version of the room being joined. - user_id: The user joining the room. - - Returns: - True if the user can join the room, false otherwise. - """ - # This only applies to room versions which support the new join rule. - if not room_version.msc3083_join_rules: - return True - - # If there's no join rule, then it defaults to public (so this doesn't apply). - join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""), None) - if not join_rules_event_id: - return True - - # If the join rule is not restricted, this doesn't apply. - join_rules_event = await self.store.get_event(join_rules_event_id) - if join_rules_event.content.get("join_rule") != JoinRules.MSC3083_RESTRICTED: - return True - - # If allowed is of the wrong form, then only allow invited users. - allowed_spaces = join_rules_event.content.get("allow", []) - if not isinstance(allowed_spaces, list): - return False - - # Get the list of joined rooms and see if there's an overlap. - joined_rooms = await self.store.get_rooms_for_user(user_id) - - # Pull out the other room IDs, invalid data gets filtered. 
- for space in allowed_spaces: - if not isinstance(space, dict): - continue - - space_id = space.get("space") - if not isinstance(space_id, str): - continue - - # The user was joined to one of the spaces specified, they can join - # this room! - if space_id in joined_rooms: - return True - - # The user was not in any of the required spaces. - return False - async def _local_membership_update( self, requester: Requester, @@ -302,7 +246,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): if ( newly_joined and not user_is_invited - and not await self._can_join_without_invite( + and not await self.event_auth_handler.can_join_without_invite( prev_state_ids, event.room_version, user_id ) ): diff --git a/synapse/server.py b/synapse/server.py index 95a2cd2e5d..045b8f3fca 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -77,6 +77,7 @@ from synapse.handlers.devicemessage import DeviceMessageHandler from synapse.handlers.directory import DirectoryHandler from synapse.handlers.e2e_keys import E2eKeysHandler from synapse.handlers.e2e_room_keys import E2eRoomKeysHandler +from synapse.handlers.event_auth import EventAuthHandler from synapse.handlers.events import EventHandler, EventStreamHandler from synapse.handlers.federation import FederationHandler from synapse.handlers.groups_local import GroupsLocalHandler, GroupsLocalWorkerHandler @@ -749,6 +750,10 @@ class HomeServer(metaclass=abc.ABCMeta): def get_space_summary_handler(self) -> SpaceSummaryHandler: return SpaceSummaryHandler(self) + @cache_in_self + def get_event_auth_handler(self) -> EventAuthHandler: + return EventAuthHandler(self) + @cache_in_self def get_external_cache(self) -> ExternalCache: return ExternalCache(self) diff --git a/tests/test_federation.py b/tests/test_federation.py index 86a44a13da..0a3a996ec1 100644 --- a/tests/test_federation.py +++ b/tests/test_federation.py @@ -75,8 +75,10 @@ class MessageAcceptTests(unittest.HomeserverTestCase): ) self.handler = self.homeserver.get_federation_handler() - self.handler.do_auth = lambda origin, event, context, auth_events: succeed( - context + self.handler._check_event_auth = ( + lambda origin, event, context, state, auth_events, backfilled: succeed( + context + ) ) self.client = self.homeserver.get_federation_client() self.client._check_sigs_and_hash_and_fetch = lambda dest, pdus, **k: succeed( -- cgit 1.5.1 From e8816c6aced208cdf1310393a212b5109892ffbf Mon Sep 17 00:00:00 2001 From: Patrick Cloke Date: Wed, 14 Apr 2021 12:33:37 -0400 Subject: Revert "Check for space membership during a remote join of a restricted room. (#9763)" This reverts commit cc51aaaa7adb0ec2235e027b5184ebda9b660ec4. The PR was prematurely merged and not yet approved. --- changelog.d/9763.feature | 1 - changelog.d/9800.feature | 1 - synapse/handlers/event_auth.py | 82 ---------------- synapse/handlers/federation.py | 212 +++++++++++++--------------------------- synapse/handlers/room_member.py | 62 +++++++++++- synapse/server.py | 5 - tests/test_federation.py | 6 +- 7 files changed, 131 insertions(+), 238 deletions(-) delete mode 100644 changelog.d/9763.feature delete mode 100644 changelog.d/9800.feature delete mode 100644 synapse/handlers/event_auth.py diff --git a/changelog.d/9763.feature b/changelog.d/9763.feature deleted file mode 100644 index 9404ad2fc0..0000000000 --- a/changelog.d/9763.feature +++ /dev/null @@ -1 +0,0 @@ -Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. 
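A note on the tests/test_federation.py hunks on either side of this revert: the stub replaces a bound coroutine, so it must mirror the method's positional signature exactly. A minimal sketch of the pattern, assuming `handler` is the federation handler instance under test:

    from twisted.internet.defer import succeed

    def stub_auth_checks(handler) -> None:
        # Swap the coroutine for a lambda returning an already-fired
        # Deferred. The parameter list must match _check_event_auth's
        # positional signature exactly, or call sites raise TypeError,
        # which is why the revert has to restore the old do_auth lambda.
        handler._check_event_auth = (
            lambda origin, event, context, state, auth_events, backfilled: succeed(
                context
            )
        )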
diff --git a/changelog.d/9800.feature b/changelog.d/9800.feature deleted file mode 100644 index 9404ad2fc0..0000000000 --- a/changelog.d/9800.feature +++ /dev/null @@ -1 +0,0 @@ -Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. diff --git a/synapse/handlers/event_auth.py b/synapse/handlers/event_auth.py deleted file mode 100644 index 06da1a93d9..0000000000 --- a/synapse/handlers/event_auth.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright 2021 The Matrix.org Foundation C.I.C. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import TYPE_CHECKING - -from synapse.api.constants import EventTypes, JoinRules -from synapse.api.room_versions import RoomVersion -from synapse.types import StateMap - -if TYPE_CHECKING: - from synapse.server import HomeServer - - -class EventAuthHandler: - def __init__(self, hs: "HomeServer"): - self._store = hs.get_datastore() - - async def can_join_without_invite( - self, state_ids: StateMap[str], room_version: RoomVersion, user_id: str - ) -> bool: - """ - Check whether a user can join a room without an invite. - - When joining a room with restricted joined rules (as defined in MSC3083), - the membership of spaces must be checked during join. - - Args: - state_ids: The state of the room as it currently is. - room_version: The room version of the room being joined. - user_id: The user joining the room. - - Returns: - True if the user can join the room, false otherwise. - """ - # This only applies to room versions which support the new join rule. - if not room_version.msc3083_join_rules: - return True - - # If there's no join rule, then it defaults to public (so this doesn't apply). - join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""), None) - if not join_rules_event_id: - return True - - # If the join rule is not restricted, this doesn't apply. - join_rules_event = await self._store.get_event(join_rules_event_id) - if join_rules_event.content.get("join_rule") != JoinRules.MSC3083_RESTRICTED: - return True - - # If allowed is of the wrong form, then only allow invited users. - allowed_spaces = join_rules_event.content.get("allow", []) - if not isinstance(allowed_spaces, list): - return False - - # Get the list of joined rooms and see if there's an overlap. - joined_rooms = await self._store.get_rooms_for_user(user_id) - - # Pull out the other room IDs, invalid data gets filtered. - for space in allowed_spaces: - if not isinstance(space, dict): - continue - - space_id = space.get("space") - if not isinstance(space_id, str): - continue - - # The user was joined to one of the spaces specified, they can join - # this room! - if space_id in joined_rooms: - return True - - # The user was not in any of the required spaces. 
- return False diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py index 0c9bdf51a4..fe1d83f6b8 100644 --- a/synapse/handlers/federation.py +++ b/synapse/handlers/federation.py @@ -103,7 +103,7 @@ logger = logging.getLogger(__name__) @attr.s(slots=True) class _NewEventInfo: - """Holds information about a received event, ready for passing to _auth_and_persist_events + """Holds information about a received event, ready for passing to _handle_new_events Attributes: event: the received event @@ -146,7 +146,6 @@ class FederationHandler(BaseHandler): self.is_mine_id = hs.is_mine_id self.spam_checker = hs.get_spam_checker() self.event_creation_handler = hs.get_event_creation_handler() - self.event_auth_handler = hs.get_event_auth_handler() self._message_handler = hs.get_message_handler() self._server_notices_mxid = hs.config.server_notices_mxid self.config = hs.config @@ -808,10 +807,7 @@ class FederationHandler(BaseHandler): logger.debug("Processing event: %s", event) try: - context = await self.state_handler.compute_event_context( - event, old_state=state - ) - await self._auth_and_persist_event(origin, event, context, state=state) + await self._handle_new_event(origin, event, state=state) except AuthError as e: raise FederationError("ERROR", e.code, e.msg, affected=event.event_id) @@ -1014,9 +1010,7 @@ class FederationHandler(BaseHandler): ) if ev_infos: - await self._auth_and_persist_events( - dest, room_id, ev_infos, backfilled=True - ) + await self._handle_new_events(dest, room_id, ev_infos, backfilled=True) # Step 2: Persist the rest of the events in the chunk one by one events.sort(key=lambda e: e.depth) @@ -1029,12 +1023,10 @@ class FederationHandler(BaseHandler): # non-outliers assert not event.internal_metadata.is_outlier() - context = await self.state_handler.compute_event_context(event) - # We store these one at a time since each event depends on the # previous to work out the state. # TODO: We can probably do something more clever here. - await self._auth_and_persist_event(dest, event, context, backfilled=True) + await self._handle_new_event(dest, event, backfilled=True) return events @@ -1368,7 +1360,7 @@ class FederationHandler(BaseHandler): event_infos.append(_NewEventInfo(event, None, auth)) - await self._auth_and_persist_events( + await self._handle_new_events( destination, room_id, event_infos, @@ -1674,47 +1666,16 @@ class FederationHandler(BaseHandler): # would introduce the danger of backwards-compatibility problems. event.internal_metadata.send_on_behalf_of = origin - # Calculate the event context. - context = await self.state_handler.compute_event_context(event) - - # Get the state before the new event. - prev_state_ids = await context.get_prev_state_ids() - - # Check if the user is already in the room or invited to the room. - user_id = event.state_key - prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None) - newly_joined = True - is_invite = False - if prev_member_event_id: - prev_member_event = await self.store.get_event(prev_member_event_id) - newly_joined = prev_member_event.membership != Membership.JOIN - is_invite = prev_member_event.membership == Membership.INVITE - - # If the member is not already in the room, and not invited, check if - # they should be allowed access via membership in a space. 
- if ( - newly_joined - and not is_invite - and not await self.event_auth_handler.can_join_without_invite( - prev_state_ids, - event.room_version, - user_id, - ) - ): - raise SynapseError( - 400, - "You do not belong to any of the required spaces to join this room.", - ) - - # Persist the event. - await self._auth_and_persist_event(origin, event, context) + context = await self._handle_new_event(origin, event) logger.debug( - "on_send_join_request: After _auth_and_persist_event: %s, sigs: %s", + "on_send_join_request: After _handle_new_event: %s, sigs: %s", event.event_id, event.signatures, ) + prev_state_ids = await context.get_prev_state_ids() + state_ids = list(prev_state_ids.values()) auth_chain = await self.store.get_auth_chain(event.room_id, state_ids) @@ -1917,11 +1878,10 @@ class FederationHandler(BaseHandler): event.internal_metadata.outlier = False - context = await self.state_handler.compute_event_context(event) - await self._auth_and_persist_event(origin, event, context) + await self._handle_new_event(origin, event) logger.debug( - "on_send_leave_request: After _auth_and_persist_event: %s, sigs: %s", + "on_send_leave_request: After _handle_new_event: %s, sigs: %s", event.event_id, event.signatures, ) @@ -2029,44 +1989,16 @@ class FederationHandler(BaseHandler): async def get_min_depth_for_context(self, context: str) -> int: return await self.store.get_min_depth(context) - async def _auth_and_persist_event( + async def _handle_new_event( self, origin: str, event: EventBase, - context: EventContext, state: Optional[Iterable[EventBase]] = None, auth_events: Optional[MutableStateMap[EventBase]] = None, backfilled: bool = False, - ) -> None: - """ - Process an event by performing auth checks and then persisting to the database. - - Args: - origin: The host the event originates from. - event: The event itself. - context: - The event context. - - NB that this function potentially modifies it. - state: - The state events used to auth the event. If this is not provided - the current state events will be used. - auth_events: - Map from (event_type, state_key) to event - - Normally, our calculated auth_events based on the state of the room - at the event's position in the DAG, though occasionally (eg if the - event is an outlier), may be the auth events claimed by the remote - server. - backfilled: True if the event was backfilled. 
- """ - context = await self._check_event_auth( - origin, - event, - context, - state=state, - auth_events=auth_events, - backfilled=backfilled, + ) -> EventContext: + context = await self._prep_event( + origin, event, state=state, auth_events=auth_events, backfilled=backfilled ) try: @@ -2088,7 +2020,9 @@ class FederationHandler(BaseHandler): ) raise - async def _auth_and_persist_events( + return context + + async def _handle_new_events( self, origin: str, room_id: str, @@ -2106,13 +2040,9 @@ class FederationHandler(BaseHandler): async def prep(ev_info: _NewEventInfo): event = ev_info.event with nested_logging_context(suffix=event.event_id): - res = await self.state_handler.compute_event_context( - event, old_state=ev_info.state - ) - res = await self._check_event_auth( + res = await self._prep_event( origin, event, - res, state=ev_info.state, auth_events=ev_info.auth_events, backfilled=backfilled, @@ -2247,6 +2177,49 @@ class FederationHandler(BaseHandler): room_id, [(event, new_event_context)] ) + async def _prep_event( + self, + origin: str, + event: EventBase, + state: Optional[Iterable[EventBase]], + auth_events: Optional[MutableStateMap[EventBase]], + backfilled: bool, + ) -> EventContext: + context = await self.state_handler.compute_event_context(event, old_state=state) + + if not auth_events: + prev_state_ids = await context.get_prev_state_ids() + auth_events_ids = self.auth.compute_auth_events( + event, prev_state_ids, for_verification=True + ) + auth_events_x = await self.store.get_events(auth_events_ids) + auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()} + + # This is a hack to fix some old rooms where the initial join event + # didn't reference the create event in its auth events. + if event.type == EventTypes.Member and not event.auth_event_ids(): + if len(event.prev_event_ids()) == 1 and event.depth < 5: + c = await self.store.get_event( + event.prev_event_ids()[0], allow_none=True + ) + if c and c.type == EventTypes.Create: + auth_events[(c.type, c.state_key)] = c + + context = await self.do_auth(origin, event, context, auth_events=auth_events) + + if not context.rejected: + await self._check_for_soft_fail(event, state, backfilled) + + if event.type == EventTypes.GuestAccess and not context.rejected: + await self.maybe_kick_guest_users(event) + + # If we are going to send this event over federation we precaclculate + # the joined hosts. + if event.internal_metadata.get_send_on_behalf_of(): + await self.event_creation_handler.cache_joined_hosts_for_event(event) + + return context + async def _check_for_soft_fail( self, event: EventBase, state: Optional[Iterable[EventBase]], backfilled: bool ) -> None: @@ -2357,28 +2330,19 @@ class FederationHandler(BaseHandler): return missing_events - async def _check_event_auth( + async def do_auth( self, origin: str, event: EventBase, context: EventContext, - state: Optional[Iterable[EventBase]], - auth_events: Optional[MutableStateMap[EventBase]], - backfilled: bool, + auth_events: MutableStateMap[EventBase], ) -> EventContext: """ - Checks whether an event should be rejected (for failing auth checks). Args: - origin: The host the event originates from. - event: The event itself. + origin: + event: context: - The event context. - - NB that this function potentially modifies it. - state: - The state events used to auth the event. If this is not provided - the current state events will be used. 
auth_events: Map from (event_type, state_key) to event @@ -2388,32 +2352,12 @@ class FederationHandler(BaseHandler): server. Also NB that this function adds entries to it. - backfilled: True if the event was backfilled. - Returns: - The updated context object. + updated context object """ room_version = await self.store.get_room_version_id(event.room_id) room_version_obj = KNOWN_ROOM_VERSIONS[room_version] - if not auth_events: - prev_state_ids = await context.get_prev_state_ids() - auth_events_ids = self.auth.compute_auth_events( - event, prev_state_ids, for_verification=True - ) - auth_events_x = await self.store.get_events(auth_events_ids) - auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()} - - # This is a hack to fix some old rooms where the initial join event - # didn't reference the create event in its auth events. - if event.type == EventTypes.Member and not event.auth_event_ids(): - if len(event.prev_event_ids()) == 1 and event.depth < 5: - c = await self.store.get_event( - event.prev_event_ids()[0], allow_none=True - ) - if c and c.type == EventTypes.Create: - auth_events[(c.type, c.state_key)] = c - try: context = await self._update_auth_events_and_context_for_auth( origin, event, context, auth_events @@ -2435,17 +2379,6 @@ class FederationHandler(BaseHandler): logger.warning("Failed auth resolution for %r because %s", event, e) context.rejected = RejectedReason.AUTH_ERROR - if not context.rejected: - await self._check_for_soft_fail(event, state, backfilled) - - if event.type == EventTypes.GuestAccess and not context.rejected: - await self.maybe_kick_guest_users(event) - - # If we are going to send this event over federation we precaclculate - # the joined hosts. - if event.internal_metadata.get_send_on_behalf_of(): - await self.event_creation_handler.cache_joined_hosts_for_event(event) - return context async def _update_auth_events_and_context_for_auth( @@ -2455,7 +2388,7 @@ class FederationHandler(BaseHandler): context: EventContext, auth_events: MutableStateMap[EventBase], ) -> EventContext: - """Helper for _check_event_auth. See there for docs. + """Helper for do_auth. See there for docs. Checks whether a given event has the expected auth events. 
If it doesn't then we talk to the remote server to compare state to see if @@ -2535,14 +2468,9 @@ class FederationHandler(BaseHandler): e.internal_metadata.outlier = True logger.debug( - "_check_event_auth %s missing_auth: %s", - event.event_id, - e.event_id, - ) - context = await self.state_handler.compute_event_context(e) - await self._auth_and_persist_event( - origin, e, context, auth_events=auth + "do_auth %s missing_auth: %s", event.event_id, e.event_id ) + await self._handle_new_event(origin, e, auth_events=auth) if e.event_id in event_auth_events: auth_events[(e.type, e.state_key)] = e diff --git a/synapse/handlers/room_member.py b/synapse/handlers/room_member.py index 2c5bada1d8..2bbfac6471 100644 --- a/synapse/handlers/room_member.py +++ b/synapse/handlers/room_member.py @@ -19,7 +19,7 @@ from http import HTTPStatus from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple from synapse import types -from synapse.api.constants import AccountDataTypes, EventTypes, Membership +from synapse.api.constants import AccountDataTypes, EventTypes, JoinRules, Membership from synapse.api.errors import ( AuthError, Codes, @@ -28,6 +28,7 @@ from synapse.api.errors import ( SynapseError, ) from synapse.api.ratelimiting import Ratelimiter +from synapse.api.room_versions import RoomVersion from synapse.events import EventBase from synapse.events.snapshot import EventContext from synapse.types import JsonDict, Requester, RoomAlias, RoomID, StateMap, UserID @@ -63,7 +64,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): self.profile_handler = hs.get_profile_handler() self.event_creation_handler = hs.get_event_creation_handler() self.account_data_handler = hs.get_account_data_handler() - self.event_auth_handler = hs.get_event_auth_handler() self.member_linearizer = Linearizer(name="member") @@ -178,6 +178,62 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): await self._invites_per_user_limiter.ratelimit(requester, invitee_user_id) + async def _can_join_without_invite( + self, state_ids: StateMap[str], room_version: RoomVersion, user_id: str + ) -> bool: + """ + Check whether a user can join a room without an invite. + + When joining a room with restricted joined rules (as defined in MSC3083), + the membership of spaces must be checked during join. + + Args: + state_ids: The state of the room as it currently is. + room_version: The room version of the room being joined. + user_id: The user joining the room. + + Returns: + True if the user can join the room, false otherwise. + """ + # This only applies to room versions which support the new join rule. + if not room_version.msc3083_join_rules: + return True + + # If there's no join rule, then it defaults to public (so this doesn't apply). + join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""), None) + if not join_rules_event_id: + return True + + # If the join rule is not restricted, this doesn't apply. + join_rules_event = await self.store.get_event(join_rules_event_id) + if join_rules_event.content.get("join_rule") != JoinRules.MSC3083_RESTRICTED: + return True + + # If allowed is of the wrong form, then only allow invited users. + allowed_spaces = join_rules_event.content.get("allow", []) + if not isinstance(allowed_spaces, list): + return False + + # Get the list of joined rooms and see if there's an overlap. + joined_rooms = await self.store.get_rooms_for_user(user_id) + + # Pull out the other room IDs, invalid data gets filtered. 
+ for space in allowed_spaces: + if not isinstance(space, dict): + continue + + space_id = space.get("space") + if not isinstance(space_id, str): + continue + + # The user was joined to one of the spaces specified, they can join + # this room! + if space_id in joined_rooms: + return True + + # The user was not in any of the required spaces. + return False + async def _local_membership_update( self, requester: Requester, @@ -246,7 +302,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): if ( newly_joined and not user_is_invited - and not await self.event_auth_handler.can_join_without_invite( + and not await self._can_join_without_invite( prev_state_ids, event.room_version, user_id ) ): diff --git a/synapse/server.py b/synapse/server.py index 045b8f3fca..95a2cd2e5d 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -77,7 +77,6 @@ from synapse.handlers.devicemessage import DeviceMessageHandler from synapse.handlers.directory import DirectoryHandler from synapse.handlers.e2e_keys import E2eKeysHandler from synapse.handlers.e2e_room_keys import E2eRoomKeysHandler -from synapse.handlers.event_auth import EventAuthHandler from synapse.handlers.events import EventHandler, EventStreamHandler from synapse.handlers.federation import FederationHandler from synapse.handlers.groups_local import GroupsLocalHandler, GroupsLocalWorkerHandler @@ -750,10 +749,6 @@ class HomeServer(metaclass=abc.ABCMeta): def get_space_summary_handler(self) -> SpaceSummaryHandler: return SpaceSummaryHandler(self) - @cache_in_self - def get_event_auth_handler(self) -> EventAuthHandler: - return EventAuthHandler(self) - @cache_in_self def get_external_cache(self) -> ExternalCache: return ExternalCache(self) diff --git a/tests/test_federation.py b/tests/test_federation.py index 0a3a996ec1..86a44a13da 100644 --- a/tests/test_federation.py +++ b/tests/test_federation.py @@ -75,10 +75,8 @@ class MessageAcceptTests(unittest.HomeserverTestCase): ) self.handler = self.homeserver.get_federation_handler() - self.handler._check_event_auth = ( - lambda origin, event, context, state, auth_events, backfilled: succeed( - context - ) + self.handler.do_auth = lambda origin, event, context, auth_events: succeed( + context ) self.client = self.homeserver.get_federation_client() self.client._check_sigs_and_hash_and_fetch = lambda dest, pdus, **k: succeed( -- cgit 1.5.1 From 936e69825ab684ca580edb6038e86fdb2e561776 Mon Sep 17 00:00:00 2001 From: Patrick Cloke Date: Wed, 14 Apr 2021 12:35:28 -0400 Subject: Separate creating an event context from persisting it in the federation handler (#9800) This refactoring allows adding logic that uses the event context before persisting it. --- changelog.d/9800.feature | 1 + synapse/handlers/federation.py | 178 ++++++++++++++++++++++++++--------------- tests/test_federation.py | 6 +- 3 files changed, 118 insertions(+), 67 deletions(-) create mode 100644 changelog.d/9800.feature diff --git a/changelog.d/9800.feature b/changelog.d/9800.feature new file mode 100644 index 0000000000..9404ad2fc0 --- /dev/null +++ b/changelog.d/9800.feature @@ -0,0 +1 @@ +Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. 
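The shape of the refactor in the diff below is that every former _handle_new_event call site now computes the event context itself before handing it over. Schematically (a sketch of the new calling convention, not a verbatim excerpt; the method name is illustrative):

    async def on_receive_pdu_sketch(self, origin, event, state):
        # Before: context creation was hidden inside _handle_new_event:
        #     context = await self._handle_new_event(origin, event, state=state)
        # After: the caller builds the context, may inspect or act on it,
        # and only then passes it in for auth checks and persistence.
        context = await self.state_handler.compute_event_context(
            event, old_state=state
        )
        # ... new logic that needs `context` before persistence goes here ...
        await self._auth_and_persist_event(origin, event, context, state=state)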
diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py index fe1d83f6b8..4b3730aa3b 100644 --- a/synapse/handlers/federation.py +++ b/synapse/handlers/federation.py @@ -103,7 +103,7 @@ logger = logging.getLogger(__name__) @attr.s(slots=True) class _NewEventInfo: - """Holds information about a received event, ready for passing to _handle_new_events + """Holds information about a received event, ready for passing to _auth_and_persist_events Attributes: event: the received event @@ -807,7 +807,10 @@ class FederationHandler(BaseHandler): logger.debug("Processing event: %s", event) try: - await self._handle_new_event(origin, event, state=state) + context = await self.state_handler.compute_event_context( + event, old_state=state + ) + await self._auth_and_persist_event(origin, event, context, state=state) except AuthError as e: raise FederationError("ERROR", e.code, e.msg, affected=event.event_id) @@ -1010,7 +1013,9 @@ class FederationHandler(BaseHandler): ) if ev_infos: - await self._handle_new_events(dest, room_id, ev_infos, backfilled=True) + await self._auth_and_persist_events( + dest, room_id, ev_infos, backfilled=True + ) # Step 2: Persist the rest of the events in the chunk one by one events.sort(key=lambda e: e.depth) @@ -1023,10 +1028,12 @@ class FederationHandler(BaseHandler): # non-outliers assert not event.internal_metadata.is_outlier() + context = await self.state_handler.compute_event_context(event) + # We store these one at a time since each event depends on the # previous to work out the state. # TODO: We can probably do something more clever here. - await self._handle_new_event(dest, event, backfilled=True) + await self._auth_and_persist_event(dest, event, context, backfilled=True) return events @@ -1360,7 +1367,7 @@ class FederationHandler(BaseHandler): event_infos.append(_NewEventInfo(event, None, auth)) - await self._handle_new_events( + await self._auth_and_persist_events( destination, room_id, event_infos, @@ -1666,10 +1673,11 @@ class FederationHandler(BaseHandler): # would introduce the danger of backwards-compatibility problems. 
event.internal_metadata.send_on_behalf_of = origin - context = await self._handle_new_event(origin, event) + context = await self.state_handler.compute_event_context(event) + context = await self._auth_and_persist_event(origin, event, context) logger.debug( - "on_send_join_request: After _handle_new_event: %s, sigs: %s", + "on_send_join_request: After _auth_and_persist_event: %s, sigs: %s", event.event_id, event.signatures, ) @@ -1878,10 +1886,11 @@ class FederationHandler(BaseHandler): event.internal_metadata.outlier = False - await self._handle_new_event(origin, event) + context = await self.state_handler.compute_event_context(event) + await self._auth_and_persist_event(origin, event, context) logger.debug( - "on_send_leave_request: After _handle_new_event: %s, sigs: %s", + "on_send_leave_request: After _auth_and_persist_event: %s, sigs: %s", event.event_id, event.signatures, ) @@ -1989,16 +1998,47 @@ class FederationHandler(BaseHandler): async def get_min_depth_for_context(self, context: str) -> int: return await self.store.get_min_depth(context) - async def _handle_new_event( + async def _auth_and_persist_event( self, origin: str, event: EventBase, + context: EventContext, state: Optional[Iterable[EventBase]] = None, auth_events: Optional[MutableStateMap[EventBase]] = None, backfilled: bool = False, ) -> EventContext: - context = await self._prep_event( - origin, event, state=state, auth_events=auth_events, backfilled=backfilled + """ + Process an event by performing auth checks and then persisting to the database. + + Args: + origin: The host the event originates from. + event: The event itself. + context: + The event context. + + NB that this function potentially modifies it. + state: + The state events used to check the event for soft-fail. If this is + not provided the current state events will be used. + auth_events: + Map from (event_type, state_key) to event + + Normally, our calculated auth_events based on the state of the room + at the event's position in the DAG, though occasionally (eg if the + event is an outlier), may be the auth events claimed by the remote + server. + backfilled: True if the event was backfilled. + + Returns: + The event context. 
+ """ + context = await self._check_event_auth( + origin, + event, + context, + state=state, + auth_events=auth_events, + backfilled=backfilled, ) try: @@ -2022,7 +2062,7 @@ class FederationHandler(BaseHandler): return context - async def _handle_new_events( + async def _auth_and_persist_events( self, origin: str, room_id: str, @@ -2040,9 +2080,13 @@ class FederationHandler(BaseHandler): async def prep(ev_info: _NewEventInfo): event = ev_info.event with nested_logging_context(suffix=event.event_id): - res = await self._prep_event( + res = await self.state_handler.compute_event_context( + event, old_state=ev_info.state + ) + res = await self._check_event_auth( origin, event, + res, state=ev_info.state, auth_events=ev_info.auth_events, backfilled=backfilled, @@ -2177,49 +2221,6 @@ class FederationHandler(BaseHandler): room_id, [(event, new_event_context)] ) - async def _prep_event( - self, - origin: str, - event: EventBase, - state: Optional[Iterable[EventBase]], - auth_events: Optional[MutableStateMap[EventBase]], - backfilled: bool, - ) -> EventContext: - context = await self.state_handler.compute_event_context(event, old_state=state) - - if not auth_events: - prev_state_ids = await context.get_prev_state_ids() - auth_events_ids = self.auth.compute_auth_events( - event, prev_state_ids, for_verification=True - ) - auth_events_x = await self.store.get_events(auth_events_ids) - auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()} - - # This is a hack to fix some old rooms where the initial join event - # didn't reference the create event in its auth events. - if event.type == EventTypes.Member and not event.auth_event_ids(): - if len(event.prev_event_ids()) == 1 and event.depth < 5: - c = await self.store.get_event( - event.prev_event_ids()[0], allow_none=True - ) - if c and c.type == EventTypes.Create: - auth_events[(c.type, c.state_key)] = c - - context = await self.do_auth(origin, event, context, auth_events=auth_events) - - if not context.rejected: - await self._check_for_soft_fail(event, state, backfilled) - - if event.type == EventTypes.GuestAccess and not context.rejected: - await self.maybe_kick_guest_users(event) - - # If we are going to send this event over federation we precaclculate - # the joined hosts. - if event.internal_metadata.get_send_on_behalf_of(): - await self.event_creation_handler.cache_joined_hosts_for_event(event) - - return context - async def _check_for_soft_fail( self, event: EventBase, state: Optional[Iterable[EventBase]], backfilled: bool ) -> None: @@ -2330,19 +2331,28 @@ class FederationHandler(BaseHandler): return missing_events - async def do_auth( + async def _check_event_auth( self, origin: str, event: EventBase, context: EventContext, - auth_events: MutableStateMap[EventBase], + state: Optional[Iterable[EventBase]], + auth_events: Optional[MutableStateMap[EventBase]], + backfilled: bool, ) -> EventContext: """ + Checks whether an event should be rejected (for failing auth checks). Args: - origin: - event: + origin: The host the event originates from. + event: The event itself. context: + The event context. + + NB that this function potentially modifies it. + state: + The state events used to check the event for soft-fail. If this is + not provided the current state events will be used. auth_events: Map from (event_type, state_key) to event @@ -2352,12 +2362,34 @@ class FederationHandler(BaseHandler): server. Also NB that this function adds entries to it. + + If this is not provided, it is calculated from the previous state IDs. 
+ backfilled: True if the event was backfilled. + Returns: - updated context object + The updated context object. """ room_version = await self.store.get_room_version_id(event.room_id) room_version_obj = KNOWN_ROOM_VERSIONS[room_version] + if not auth_events: + prev_state_ids = await context.get_prev_state_ids() + auth_events_ids = self.auth.compute_auth_events( + event, prev_state_ids, for_verification=True + ) + auth_events_x = await self.store.get_events(auth_events_ids) + auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()} + + # This is a hack to fix some old rooms where the initial join event + # didn't reference the create event in its auth events. + if event.type == EventTypes.Member and not event.auth_event_ids(): + if len(event.prev_event_ids()) == 1 and event.depth < 5: + c = await self.store.get_event( + event.prev_event_ids()[0], allow_none=True + ) + if c and c.type == EventTypes.Create: + auth_events[(c.type, c.state_key)] = c + try: context = await self._update_auth_events_and_context_for_auth( origin, event, context, auth_events @@ -2379,6 +2411,17 @@ class FederationHandler(BaseHandler): logger.warning("Failed auth resolution for %r because %s", event, e) context.rejected = RejectedReason.AUTH_ERROR + if not context.rejected: + await self._check_for_soft_fail(event, state, backfilled) + + if event.type == EventTypes.GuestAccess and not context.rejected: + await self.maybe_kick_guest_users(event) + + # If we are going to send this event over federation we precaclculate + # the joined hosts. + if event.internal_metadata.get_send_on_behalf_of(): + await self.event_creation_handler.cache_joined_hosts_for_event(event) + return context async def _update_auth_events_and_context_for_auth( @@ -2388,7 +2431,7 @@ class FederationHandler(BaseHandler): context: EventContext, auth_events: MutableStateMap[EventBase], ) -> EventContext: - """Helper for do_auth. See there for docs. + """Helper for _check_event_auth. See there for docs. Checks whether a given event has the expected auth events. 
If it doesn't then we talk to the remote server to compare state to see if @@ -2468,9 +2511,14 @@ class FederationHandler(BaseHandler): e.internal_metadata.outlier = True logger.debug( - "do_auth %s missing_auth: %s", event.event_id, e.event_id + "_check_event_auth %s missing_auth: %s", + event.event_id, + e.event_id, + ) + context = await self.state_handler.compute_event_context(e) + await self._auth_and_persist_event( + origin, e, context, auth_events=auth ) - await self._handle_new_event(origin, e, auth_events=auth) if e.event_id in event_auth_events: auth_events[(e.type, e.state_key)] = e diff --git a/tests/test_federation.py b/tests/test_federation.py index 86a44a13da..0a3a996ec1 100644 --- a/tests/test_federation.py +++ b/tests/test_federation.py @@ -75,8 +75,10 @@ class MessageAcceptTests(unittest.HomeserverTestCase): ) self.handler = self.homeserver.get_federation_handler() - self.handler.do_auth = lambda origin, event, context, auth_events: succeed( - context + self.handler._check_event_auth = ( + lambda origin, event, context, state, auth_events, backfilled: succeed( + context + ) ) self.client = self.homeserver.get_federation_client() self.client._check_sigs_and_hash_and_fetch = lambda dest, pdus, **k: succeed( -- cgit 1.5.1 From 5a153772c197a689df6c087e49d7bd8beee5dbdd Mon Sep 17 00:00:00 2001 From: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> Date: Wed, 14 Apr 2021 19:09:08 +0100 Subject: remove `HomeServer.get_config` (#9815) Every single time I want to access the config object, I have to remember whether or not we use `get_config`. Let's just get rid of it. --- changelog.d/9815.misc | 1 + synapse/app/generic_worker.py | 2 +- synapse/app/homeserver.py | 18 +++++++++--------- synapse/crypto/keyring.py | 2 +- synapse/federation/federation_server.py | 2 +- synapse/federation/sender/transaction_manager.py | 2 +- synapse/rest/media/v1/config_resource.py | 2 +- synapse/server.py | 3 --- tests/app/test_openid_listener.py | 2 +- 9 files changed, 16 insertions(+), 18 deletions(-) create mode 100644 changelog.d/9815.misc diff --git a/changelog.d/9815.misc b/changelog.d/9815.misc new file mode 100644 index 0000000000..e33d012d3d --- /dev/null +++ b/changelog.d/9815.misc @@ -0,0 +1 @@ +Replace `HomeServer.get_config()` with inline references. 
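The change is mechanical, and every call site in the diffs below follows the same before/after pattern (assuming `hs` is a HomeServer instance):

    # Before: an indirection through a getter.
    enabled = hs.get_config().enable_metrics

    # After: read the attribute that the getter merely returned.
    enabled = hs.config.enable_metrics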
diff --git a/synapse/app/generic_worker.py b/synapse/app/generic_worker.py index 28e3b1aa3c..26c458dbb6 100644 --- a/synapse/app/generic_worker.py +++ b/synapse/app/generic_worker.py @@ -405,7 +405,7 @@ class GenericWorkerServer(HomeServer): listener.bind_addresses, listener.port, manhole_globals={"hs": self} ) elif listener.type == "metrics": - if not self.get_config().enable_metrics: + if not self.config.enable_metrics: logger.warning( ( "Metrics listener configured, but " diff --git a/synapse/app/homeserver.py b/synapse/app/homeserver.py index 679b7f4289..8be8b520eb 100644 --- a/synapse/app/homeserver.py +++ b/synapse/app/homeserver.py @@ -191,7 +191,7 @@ class SynapseHomeServer(HomeServer): } ) - if self.get_config().threepid_behaviour_email == ThreepidBehaviour.LOCAL: + if self.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL: from synapse.rest.synapse.client.password_reset import ( PasswordResetSubmitTokenResource, ) @@ -230,7 +230,7 @@ class SynapseHomeServer(HomeServer): ) if name in ["media", "federation", "client"]: - if self.get_config().enable_media_repo: + if self.config.enable_media_repo: media_repo = self.get_media_repository_resource() resources.update( {MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo} @@ -244,7 +244,7 @@ class SynapseHomeServer(HomeServer): resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self) if name == "webclient": - webclient_loc = self.get_config().web_client_location + webclient_loc = self.config.web_client_location if webclient_loc is None: logger.warning( @@ -265,7 +265,7 @@ class SynapseHomeServer(HomeServer): # https://twistedmatrix.com/trac/ticket/7678 resources[WEB_CLIENT_PREFIX] = File(webclient_loc) - if name == "metrics" and self.get_config().enable_metrics: + if name == "metrics" and self.config.enable_metrics: resources[METRICS_PREFIX] = MetricsResource(RegistryProxy) if name == "replication": @@ -274,9 +274,7 @@ class SynapseHomeServer(HomeServer): return resources def start_listening(self, listeners: Iterable[ListenerConfig]): - config = self.get_config() - - if config.redis_enabled: + if self.config.redis_enabled: # If redis is enabled we connect via the replication command handler # in the same way as the workers (since we're effectively a client # rather than a server). 
@@ -284,7 +282,9 @@ class SynapseHomeServer(HomeServer): for listener in listeners: if listener.type == "http": - self._listening_services.extend(self._listener_http(config, listener)) + self._listening_services.extend( + self._listener_http(self.config, listener) + ) elif listener.type == "manhole": _base.listen_manhole( listener.bind_addresses, listener.port, manhole_globals={"hs": self} @@ -298,7 +298,7 @@ class SynapseHomeServer(HomeServer): for s in services: reactor.addSystemEventTrigger("before", "shutdown", s.stopListening) elif listener.type == "metrics": - if not self.get_config().enable_metrics: + if not self.config.enable_metrics: logger.warning( ( "Metrics listener configured, but " diff --git a/synapse/crypto/keyring.py b/synapse/crypto/keyring.py index 40073dc7c2..5f18ef7748 100644 --- a/synapse/crypto/keyring.py +++ b/synapse/crypto/keyring.py @@ -501,7 +501,7 @@ class StoreKeyFetcher(KeyFetcher): class BaseV2KeyFetcher(KeyFetcher): def __init__(self, hs: "HomeServer"): self.store = hs.get_datastore() - self.config = hs.get_config() + self.config = hs.config async def process_v2_response( self, from_server: str, response_json: JsonDict, time_added_ms: int diff --git a/synapse/federation/federation_server.py b/synapse/federation/federation_server.py index 3ff6479cfb..b729a69203 100644 --- a/synapse/federation/federation_server.py +++ b/synapse/federation/federation_server.py @@ -136,7 +136,7 @@ class FederationServer(FederationBase): ) # type: ResponseCache[Tuple[str, str]] self._federation_metrics_domains = ( - hs.get_config().federation.federation_metrics_domains + hs.config.federation.federation_metrics_domains ) async def on_backfill_request( diff --git a/synapse/federation/sender/transaction_manager.py b/synapse/federation/sender/transaction_manager.py index 12fe3a719b..72a635830b 100644 --- a/synapse/federation/sender/transaction_manager.py +++ b/synapse/federation/sender/transaction_manager.py @@ -56,7 +56,7 @@ class TransactionManager: self._transport_layer = hs.get_federation_transport_client() self._federation_metrics_domains = ( - hs.get_config().federation.federation_metrics_domains + hs.config.federation.federation_metrics_domains ) # HACK to get unique tx id diff --git a/synapse/rest/media/v1/config_resource.py b/synapse/rest/media/v1/config_resource.py index b20c29f007..a1d36e5cf1 100644 --- a/synapse/rest/media/v1/config_resource.py +++ b/synapse/rest/media/v1/config_resource.py @@ -30,7 +30,7 @@ class MediaConfigResource(DirectServeJsonResource): def __init__(self, hs: "HomeServer"): super().__init__() - config = hs.get_config() + config = hs.config self.clock = hs.get_clock() self.auth = hs.get_auth() self.limits_dict = {"m.upload.size": config.max_upload_size} diff --git a/synapse/server.py b/synapse/server.py index 95a2cd2e5d..42d2fad8e8 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -323,9 +323,6 @@ class HomeServer(metaclass=abc.ABCMeta): return self.datastores - def get_config(self) -> HomeServerConfig: - return self.config - @cache_in_self def get_distributor(self) -> Distributor: return Distributor() diff --git a/tests/app/test_openid_listener.py b/tests/app/test_openid_listener.py index 276f09015e..264e101082 100644 --- a/tests/app/test_openid_listener.py +++ b/tests/app/test_openid_listener.py @@ -109,7 +109,7 @@ class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase): } # Listen with the config - self.hs._listener_http(self.hs.get_config(), parse_listener_def(config)) + self.hs._listener_http(self.hs.config, 
parse_listener_def(config)) # Grab the resource from the site that was told to listen site = self.reactor.tcpServers[0][1] -- cgit 1.5.1 From 601b893352838c1391da083e8edde62904d23208 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Fri, 16 Apr 2021 14:44:55 +0100 Subject: Small speed up joining large remote rooms (#9825) There are a couple of points in `persist_events` where we are doing a query per event in series, which we can replace. --- changelog.d/9825.misc | 1 + synapse/storage/databases/main/events.py | 54 +++++++++++++++++++------------- 2 files changed, 34 insertions(+), 21 deletions(-) create mode 100644 changelog.d/9825.misc diff --git a/changelog.d/9825.misc b/changelog.d/9825.misc new file mode 100644 index 0000000000..42f3f15619 --- /dev/null +++ b/changelog.d/9825.misc @@ -0,0 +1 @@ +Small speed up for joining large remote rooms. diff --git a/synapse/storage/databases/main/events.py b/synapse/storage/databases/main/events.py index bed4326d11..a362521e20 100644 --- a/synapse/storage/databases/main/events.py +++ b/synapse/storage/databases/main/events.py @@ -1378,17 +1378,21 @@ class PersistEventsStore: ], ) - for event, _ in events_and_contexts: - if not event.internal_metadata.is_redacted(): - # If we're persisting an unredacted event we go and ensure - # that we mark any redactions that reference this event as - # requiring censoring. - self.db_pool.simple_update_txn( - txn, - table="redactions", - keyvalues={"redacts": event.event_id}, - updatevalues={"have_censored": False}, + # If we're persisting an unredacted event we go and ensure + # that we mark any redactions that reference this event as + # requiring censoring. + sql = "UPDATE redactions SET have_censored = ? WHERE redacts = ?" + txn.execute_batch( + sql, + ( + ( + False, + event.event_id, ) + for event, _ in events_and_contexts + if not event.internal_metadata.is_redacted() + ), + ) state_events_and_contexts = [ ec for ec in events_and_contexts if ec[0].is_state() @@ -1881,20 +1885,28 @@ class PersistEventsStore: ), ) - for event, _ in events_and_contexts: - user_ids = self.db_pool.simple_select_onecol_txn( - txn, - table="event_push_actions_staging", - keyvalues={"event_id": event.event_id}, - retcol="user_id", - ) + room_to_event_ids = {} # type: Dict[str, List[str]] + for e, _ in events_and_contexts: + room_to_event_ids.setdefault(e.room_id, []).append(e.event_id) - for uid in user_ids: - txn.call_after( - self.store.get_unread_event_push_actions_by_room_for_user.invalidate_many, - (event.room_id, uid), + for room_id, event_ids in room_to_event_ids.items(): + rows = self.db_pool.simple_select_many_txn( + txn, + table="event_push_actions_staging", + column="event_id", + iterable=event_ids, + keyvalues={}, + retcols=("user_id",), ) + user_ids = {row["user_id"] for row in rows} + + for user_id in user_ids: + txn.call_after( + self.store.get_unread_event_push_actions_by_room_for_user.invalidate_many, + (room_id, user_id), + ) + # Now we delete the staging area for *all* events that were being # persisted. txn.execute_batch( -- cgit 1.5.1 From c571736c6ca5d1d2d9bf7cd9b717465d446ac7b3 Mon Sep 17 00:00:00 2001 From: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com> Date: Fri, 16 Apr 2021 18:17:18 +0100 Subject: User directory: use calculated room membership state instead (#9821) Fixes: #9797. Should help reduce CPU usage on the user directory, especially when memberships change in rooms with lots of state history. 
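
The fix below swaps `state.get_current_users_in_room` (which may have to resolve room state) for direct reads from the membership table. A rough sketch of that query shape, against a toy in-memory table that only approximates Synapse's `room_memberships` schema:

    import sqlite3

    # Toy approximation of the room_memberships table, to show the shape of
    # the query run by the new get_users_in_room_with_profiles store method.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE room_memberships "
        "(room_id TEXT, user_id TEXT, membership TEXT, "
        "display_name TEXT, avatar_url TEXT)"
    )
    conn.executemany(
        "INSERT INTO room_memberships VALUES (?, ?, ?, ?, ?)",
        [
            ("!room:test", "@alice:test", "join", "Alice", None),
            ("!room:test", "@bob:test", "join", "Bob", "mxc://test/bob"),
            ("!room:test", "@carol:test", "leave", "Carol", None),
        ],
    )

    # Same SELECT as in the patch: joined members only, no state resolution.
    rows = conn.execute(
        "SELECT user_id, display_name, avatar_url FROM room_memberships "
        "WHERE room_id = ? AND membership = ?",
        ("!room:test", "join"),
    )
    print({user_id: (name, avatar) for user_id, name, avatar in rows})
    # -> {'@alice:test': ('Alice', None), '@bob:test': ('Bob', 'mxc://test/bob')}
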
--- changelog.d/9821.misc | 1 + synapse/handlers/user_directory.py | 15 ++++++++------- synapse/storage/databases/main/roommember.py | 27 +++++++++++++++++++++++++++ 3 files changed, 36 insertions(+), 7 deletions(-) create mode 100644 changelog.d/9821.misc diff --git a/changelog.d/9821.misc b/changelog.d/9821.misc new file mode 100644 index 0000000000..03b2d2ed4d --- /dev/null +++ b/changelog.d/9821.misc @@ -0,0 +1 @@ +Reduce CPU usage of the user directory by reusing existing calculated room membership. \ No newline at end of file diff --git a/synapse/handlers/user_directory.py b/synapse/handlers/user_directory.py index 9b1e6d5c18..dacc4f3076 100644 --- a/synapse/handlers/user_directory.py +++ b/synapse/handlers/user_directory.py @@ -44,7 +44,6 @@ class UserDirectoryHandler(StateDeltasHandler): super().__init__(hs) self.store = hs.get_datastore() - self.state = hs.get_state_handler() self.server_name = hs.hostname self.clock = hs.get_clock() self.notifier = hs.get_notifier() @@ -302,10 +301,12 @@ class UserDirectoryHandler(StateDeltasHandler): # ignore the change return - users_with_profile = await self.state.get_current_users_in_room(room_id) + other_users_in_room_with_profiles = ( + await self.store.get_users_in_room_with_profiles(room_id) + ) # Remove every user from the sharing tables for that room. - for user_id in users_with_profile.keys(): + for user_id in other_users_in_room_with_profiles.keys(): await self.store.remove_user_who_share_room(user_id, room_id) # Then, re-add them to the tables. @@ -314,7 +315,7 @@ class UserDirectoryHandler(StateDeltasHandler): # which when ran over an entire room, will result in the same values # being added multiple times. The batching upserts shouldn't make this # too bad, though. - for user_id, profile in users_with_profile.items(): + for user_id, profile in other_users_in_room_with_profiles.items(): await self._handle_new_user(room_id, user_id, profile) async def _handle_new_user( @@ -336,7 +337,7 @@ class UserDirectoryHandler(StateDeltasHandler): room_id ) # Now we update users who share rooms with users. - users_with_profile = await self.state.get_current_users_in_room(room_id) + other_users_in_room = await self.store.get_users_in_room(room_id) if is_public: await self.store.add_users_in_public_rooms(room_id, (user_id,)) @@ -352,14 +353,14 @@ class UserDirectoryHandler(StateDeltasHandler): # We don't care about appservice users. if not is_appservice: - for other_user_id in users_with_profile: + for other_user_id in other_users_in_room: if user_id == other_user_id: continue to_insert.add((user_id, other_user_id)) # Next we need to update for every local user in the room - for other_user_id in users_with_profile: + for other_user_id in other_users_in_room: if user_id == other_user_id: continue diff --git a/synapse/storage/databases/main/roommember.py b/synapse/storage/databases/main/roommember.py index ef5587f87a..fd525dce65 100644 --- a/synapse/storage/databases/main/roommember.py +++ b/synapse/storage/databases/main/roommember.py @@ -173,6 +173,33 @@ class RoomMemberWorkerStore(EventsWorkerStore): txn.execute(sql, (room_id, Membership.JOIN)) return [r[0] for r in txn] + @cached(max_entries=100000, iterable=True) + async def get_users_in_room_with_profiles( + self, room_id: str + ) -> Dict[str, ProfileInfo]: + """Get a mapping from user ID to profile information for all users in a given room. + + Args: + room_id: The ID of the room to retrieve the users of. + + Returns: + A mapping from user ID to ProfileInfo. 
+ """ + + def _get_users_in_room_with_profiles(txn) -> Dict[str, ProfileInfo]: + sql = """ + SELECT user_id, display_name, avatar_url FROM room_memberships + WHERE room_id = ? AND membership = ? + """ + txn.execute(sql, (room_id, Membership.JOIN)) + + return {r[0]: ProfileInfo(display_name=r[1], avatar_url=r[2]) for r in txn} + + return await self.db_pool.runInteraction( + "get_users_in_room_with_profiles", + _get_users_in_room_with_profiles, + ) + @cached(max_entries=100000) async def get_room_summary(self, room_id: str) -> Dict[str, MemberSummary]: """Get the details of a room roughly suitable for use by the room -- cgit 1.5.1 From 2b7dd21655b1ed2db490853d2cdbf6fb38704d81 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Mon, 19 Apr 2021 10:50:49 +0100 Subject: Don't send normal presence updates over federation replication stream (#9828) --- changelog.d/9828.feature | 1 + synapse/federation/send_queue.py | 70 +------------------------ synapse/federation/sender/__init__.py | 96 +---------------------------------- synapse/handlers/presence.py | 78 ++++++++++++++++++++++------ synapse/module_api/__init__.py | 13 +++-- 5 files changed, 75 insertions(+), 183 deletions(-) create mode 100644 changelog.d/9828.feature diff --git a/changelog.d/9828.feature b/changelog.d/9828.feature new file mode 100644 index 0000000000..f56b0bb3bd --- /dev/null +++ b/changelog.d/9828.feature @@ -0,0 +1 @@ +Add experimental support for handling presence on a worker. diff --git a/synapse/federation/send_queue.py b/synapse/federation/send_queue.py index e3f0bc2471..d71f04e43e 100644 --- a/synapse/federation/send_queue.py +++ b/synapse/federation/send_queue.py @@ -76,9 +76,6 @@ class FederationRemoteSendQueue(AbstractFederationSender): # Pending presence map user_id -> UserPresenceState self.presence_map = {} # type: Dict[str, UserPresenceState] - # Stream position -> list[user_id] - self.presence_changed = SortedDict() # type: SortedDict[int, List[str]] - # Stores the destinations we need to explicitly send presence to about a # given user. # Stream position -> (user_id, destinations) @@ -96,7 +93,7 @@ class FederationRemoteSendQueue(AbstractFederationSender): self.edus = SortedDict() # type: SortedDict[int, Edu] - # stream ID for the next entry into presence_changed/keyed_edu_changed/edus. + # stream ID for the next entry into keyed_edu_changed/edus. 
self.pos = 1 # map from stream ID to the time that stream entry was generated, so that we @@ -117,7 +114,6 @@ class FederationRemoteSendQueue(AbstractFederationSender): for queue_name in [ "presence_map", - "presence_changed", "keyed_edu", "keyed_edu_changed", "edus", @@ -155,23 +151,12 @@ class FederationRemoteSendQueue(AbstractFederationSender): """Clear all the queues from before a given position""" with Measure(self.clock, "send_queue._clear"): # Delete things out of presence maps - keys = self.presence_changed.keys() - i = self.presence_changed.bisect_left(position_to_delete) - for key in keys[:i]: - del self.presence_changed[key] - - user_ids = { - user_id for uids in self.presence_changed.values() for user_id in uids - } - keys = self.presence_destinations.keys() i = self.presence_destinations.bisect_left(position_to_delete) for key in keys[:i]: del self.presence_destinations[key] - user_ids.update( - user_id for user_id, _ in self.presence_destinations.values() - ) + user_ids = {user_id for user_id, _ in self.presence_destinations.values()} to_del = [ user_id for user_id in self.presence_map if user_id not in user_ids @@ -244,23 +229,6 @@ class FederationRemoteSendQueue(AbstractFederationSender): """ # nothing to do here: the replication listener will handle it. - def send_presence(self, states: List[UserPresenceState]) -> None: - """As per FederationSender - - Args: - states - """ - pos = self._next_pos() - - # We only want to send presence for our own users, so lets always just - # filter here just in case. - local_states = [s for s in states if self.is_mine_id(s.user_id)] - - self.presence_map.update({state.user_id: state for state in local_states}) - self.presence_changed[pos] = [state.user_id for state in local_states] - - self.notifier.on_new_replication_data() - def send_presence_to_destinations( self, states: Iterable[UserPresenceState], destinations: Iterable[str] ) -> None: @@ -325,18 +293,6 @@ class FederationRemoteSendQueue(AbstractFederationSender): # of the federation stream. 
rows = [] # type: List[Tuple[int, BaseFederationRow]] - # Fetch changed presence - i = self.presence_changed.bisect_right(from_token) - j = self.presence_changed.bisect_right(to_token) + 1 - dest_user_ids = [ - (pos, user_id) - for pos, user_id_list in self.presence_changed.items()[i:j] - for user_id in user_id_list - ] - - for (key, user_id) in dest_user_ids: - rows.append((key, PresenceRow(state=self.presence_map[user_id]))) - # Fetch presence to send to destinations i = self.presence_destinations.bisect_right(from_token) j = self.presence_destinations.bisect_right(to_token) + 1 @@ -427,22 +383,6 @@ class BaseFederationRow: raise NotImplementedError() -class PresenceRow( - BaseFederationRow, namedtuple("PresenceRow", ("state",)) # UserPresenceState -): - TypeId = "p" - - @staticmethod - def from_data(data): - return PresenceRow(state=UserPresenceState.from_dict(data)) - - def to_data(self): - return self.state.as_dict() - - def add_to_buffer(self, buff): - buff.presence.append(self.state) - - class PresenceDestinationsRow( BaseFederationRow, namedtuple( @@ -506,7 +446,6 @@ class EduRow(BaseFederationRow, namedtuple("EduRow", ("edu",))): # Edu _rowtypes = ( - PresenceRow, PresenceDestinationsRow, KeyedEduRow, EduRow, @@ -518,7 +457,6 @@ TypeToRow = {Row.TypeId: Row for Row in _rowtypes} ParsedFederationStreamData = namedtuple( "ParsedFederationStreamData", ( - "presence", # list(UserPresenceState) "presence_destinations", # list of tuples of UserPresenceState and destinations "keyed_edus", # dict of destination -> { key -> Edu } "edus", # dict of destination -> [Edu] @@ -543,7 +481,6 @@ def process_rows_for_federation( # them into the appropriate collection and then send them off. buff = ParsedFederationStreamData( - presence=[], presence_destinations=[], keyed_edus={}, edus={}, @@ -559,9 +496,6 @@ def process_rows_for_federation( parsed_row = RowType.from_data(row.data) parsed_row.add_to_buffer(buff) - if buff.presence: - transaction_queue.send_presence(buff.presence) - for state, destinations in buff.presence_destinations: transaction_queue.send_presence_to_destinations( states=[state], destinations=destinations diff --git a/synapse/federation/sender/__init__.py b/synapse/federation/sender/__init__.py index 952ad39f8c..6266accaf5 100644 --- a/synapse/federation/sender/__init__.py +++ b/synapse/federation/sender/__init__.py @@ -24,8 +24,6 @@ from synapse.events import EventBase from synapse.federation.sender.per_destination_queue import PerDestinationQueue from synapse.federation.sender.transaction_manager import TransactionManager from synapse.federation.units import Edu -from synapse.handlers.presence import get_interested_remotes -from synapse.logging.context import preserve_fn from synapse.metrics import ( LaterGauge, event_processing_loop_counter, @@ -34,7 +32,7 @@ from synapse.metrics import ( ) from synapse.metrics.background_process_metrics import run_as_background_process from synapse.types import Collection, JsonDict, ReadReceipt, RoomStreamToken -from synapse.util.metrics import Measure, measure_func +from synapse.util.metrics import Measure if TYPE_CHECKING: from synapse.events.presence_router import PresenceRouter @@ -79,15 +77,6 @@ class AbstractFederationSender(metaclass=abc.ABCMeta): """ raise NotImplementedError() - @abc.abstractmethod - def send_presence(self, states: List[UserPresenceState]) -> None: - """Send the new presence states to the appropriate destinations. 
- - This actually queues up the presence states ready for sending and - triggers a background task to process them and send out the transactions. - """ - raise NotImplementedError() - @abc.abstractmethod def send_presence_to_destinations( self, states: Iterable[UserPresenceState], destinations: Iterable[str] @@ -176,11 +165,6 @@ class FederationSender(AbstractFederationSender): ), ) - # Map of user_id -> UserPresenceState for all the pending presence - # to be sent out by user_id. Entries here get processed and put in - # pending_presence_by_dest - self.pending_presence = {} # type: Dict[str, UserPresenceState] - LaterGauge( "synapse_federation_transaction_queue_pending_pdus", "", @@ -201,8 +185,6 @@ class FederationSender(AbstractFederationSender): self._is_processing = False self._last_poked_id = -1 - self._processing_pending_presence = False - # map from room_id to a set of PerDestinationQueues which we believe are # awaiting a call to flush_read_receipts_for_room. The presence of an entry # here for a given room means that we are rate-limiting RR flushes to that room, @@ -546,48 +528,6 @@ class FederationSender(AbstractFederationSender): for queue in queues: queue.flush_read_receipts_for_room(room_id) - @preserve_fn # the caller should not yield on this - async def send_presence(self, states: List[UserPresenceState]) -> None: - """Send the new presence states to the appropriate destinations. - - This actually queues up the presence states ready for sending and - triggers a background task to process them and send out the transactions. - """ - if not self.hs.config.use_presence: - # No-op if presence is disabled. - return - - # First we queue up the new presence by user ID, so multiple presence - # updates in quick succession are correctly handled. - # We only want to send presence for our own users, so lets always just - # filter here just in case. - self.pending_presence.update( - {state.user_id: state for state in states if self.is_mine_id(state.user_id)} - ) - - # We then handle the new pending presence in batches, first figuring - # out the destinations we need to send each state to and then poking it - # to attempt a new transaction. 
We linearize this so that we don't - # accidentally mess up the ordering and send multiple presence updates - # in the wrong order - if self._processing_pending_presence: - return - - self._processing_pending_presence = True - try: - while True: - states_map = self.pending_presence - self.pending_presence = {} - - if not states_map: - break - - await self._process_presence_inner(list(states_map.values())) - except Exception: - logger.exception("Error sending presence states to servers") - finally: - self._processing_pending_presence = False - def send_presence_to_destinations( self, states: Iterable[UserPresenceState], destinations: Iterable[str] ) -> None: @@ -608,40 +548,6 @@ class FederationSender(AbstractFederationSender): continue self._get_per_destination_queue(destination).send_presence(states) - @measure_func("txnqueue._process_presence") - async def _process_presence_inner(self, states: List[UserPresenceState]) -> None: - """Given a list of states populate self.pending_presence_by_dest and - poke to send a new transaction to each destination - """ - # We pull the presence router here instead of __init__ - # to prevent a dependency cycle: - # - # AuthHandler -> Notifier -> FederationSender - # -> PresenceRouter -> ModuleApi -> AuthHandler - if self._presence_router is None: - self._presence_router = self.hs.get_presence_router() - - assert self._presence_router is not None - - hosts_and_states = await get_interested_remotes( - self.store, - self._presence_router, - states, - self.state, - ) - - for destinations, states in hosts_and_states: - for destination in destinations: - if destination == self.server_name: - continue - - if not self._federation_shard_config.should_handle( - self._instance_name, destination - ): - continue - - self._get_per_destination_queue(destination).send_presence(states) - def build_and_send_edu( self, destination: str, diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index e120dd1f48..6460eb9952 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -123,6 +123,14 @@ class BasePresenceHandler(abc.ABC): def __init__(self, hs: "HomeServer"): self.clock = hs.get_clock() self.store = hs.get_datastore() + self.presence_router = hs.get_presence_router() + self.state = hs.get_state_handler() + + self._federation = None + if hs.should_send_federation() or not hs.config.worker_app: + self._federation = hs.get_federation_sender() + + self._send_federation = hs.should_send_federation() self._busy_presence_enabled = hs.config.experimental.msc3026_enabled @@ -249,6 +257,29 @@ class BasePresenceHandler(abc.ABC): """Process presence stream rows received over replication.""" pass + async def maybe_send_presence_to_interested_destinations( + self, states: List[UserPresenceState] + ): + """If this instance is a federation sender, send the states to all + destinations that are interested. + """ + + if not self._send_federation: + return + + # If this worker sends federation we must have a FederationSender. 
+ assert self._federation + + hosts_and_states = await get_interested_remotes( + self.store, + self.presence_router, + states, + self.state, + ) + + for destinations, states in hosts_and_states: + self._federation.send_presence_to_destinations(states, destinations) + class _NullContextManager(ContextManager[None]): """A context manager which does nothing.""" @@ -263,7 +294,6 @@ class WorkerPresenceHandler(BasePresenceHandler): self.hs = hs self.is_mine_id = hs.is_mine_id - self.presence_router = hs.get_presence_router() self._presence_enabled = hs.config.use_presence # The number of ongoing syncs on this process, by user id. @@ -388,6 +418,9 @@ class WorkerPresenceHandler(BasePresenceHandler): users=users_to_states.keys(), ) + # If this is a federation sender, notify about presence updates. + await self.maybe_send_presence_to_interested_destinations(states) + async def process_replication_rows(self, token, rows): states = [ UserPresenceState( @@ -463,9 +496,6 @@ class PresenceHandler(BasePresenceHandler): self.server_name = hs.hostname self.wheel_timer = WheelTimer() self.notifier = hs.get_notifier() - self.federation = hs.get_federation_sender() - self.state = hs.get_state_handler() - self.presence_router = hs.get_presence_router() self._presence_enabled = hs.config.use_presence federation_registry = hs.get_federation_registry() @@ -672,6 +702,13 @@ class PresenceHandler(BasePresenceHandler): self.unpersisted_users_changes |= {s.user_id for s in new_states} self.unpersisted_users_changes -= set(to_notify.keys()) + # Check if we need to resend any presence states to remote hosts. We + # only do this for states that haven't been updated in a while to + # ensure that the remote host doesn't time the presence state out. + # + # Note that since these are states that have *not* been updated, + # they won't get sent down the normal presence replication stream, + # and so we have to explicitly send them via the federation stream. to_federation_ping = { user_id: state for user_id, state in to_federation_ping.items() @@ -680,7 +717,19 @@ class PresenceHandler(BasePresenceHandler): if to_federation_ping: federation_presence_out_counter.inc(len(to_federation_ping)) - self._push_to_remotes(to_federation_ping.values()) + hosts_and_states = await get_interested_remotes( + self.store, + self.presence_router, + list(to_federation_ping.values()), + self.state, + ) + + # Since this is master we know that we have a federation sender or + # queue, and so this will be defined. + assert self._federation + + for destinations, states in hosts_and_states: + self._federation.send_presence_to_destinations(states, destinations) async def _handle_timeouts(self): """Checks the presence of users that have timed out and updates as @@ -920,15 +969,10 @@ class PresenceHandler(BasePresenceHandler): users=[UserID.from_string(u) for u in users_to_states], ) - self._push_to_remotes(states) - - def _push_to_remotes(self, states): - """Sends state updates to remote servers. - - Args: - states (list(UserPresenceState)) - """ - self.federation.send_presence(states) + # We only want to poke the local federation sender, if any, as other + # workers will receive the presence updates via the presence replication + # stream (which is updated by `store.update_presence`). 
+ await self.maybe_send_presence_to_interested_destinations(states) async def incoming_presence(self, origin, content): """Called when we receive a `m.presence` EDU from a remote server.""" @@ -1164,9 +1208,13 @@ class PresenceHandler(BasePresenceHandler): user_presence_states ) + # Since this is master we know that we have a federation sender or + # queue, and so this will be defined. + assert self._federation + # Send out user presence updates for each destination for destination, user_state_set in presence_destinations.items(): - self.federation.send_presence_to_destinations( + self._federation.send_presence_to_destinations( destinations=[destination], states=user_state_set ) diff --git a/synapse/module_api/__init__.py b/synapse/module_api/__init__.py index b7dbbfc27c..a1a2b9aecc 100644 --- a/synapse/module_api/__init__.py +++ b/synapse/module_api/__init__.py @@ -50,6 +50,7 @@ class ModuleApi: self._auth_handler = auth_handler self._server_name = hs.hostname self._presence_stream = hs.get_event_sources().sources["presence"] + self._state = hs.get_state_handler() # We expose these as properties below in order to attach a helpful docstring. self._http_client = hs.get_simple_http_client() # type: SimpleHttpClient @@ -429,11 +430,13 @@ class ModuleApi: UserID.from_string(user), from_key=None, include_offline=False ) - # Send to remote destinations - await make_deferred_yieldable( - # We pull the federation sender here as we can only do so on workers - # that support sending presence - self._hs.get_federation_sender().send_presence(presence_events) + # Send to remote destinations. + + # We pull out the presence handler here to break a cyclic + # dependency between the presence router and module API. + presence_handler = self._hs.get_presence_handler() + await presence_handler.maybe_send_presence_to_interested_destinations( + presence_events ) -- cgit 1.5.1 From e694a598f8c948ad177e897c5bedaa71a47add29 Mon Sep 17 00:00:00 2001 From: Denis Kasak Date: Mon, 19 Apr 2021 16:21:46 +0000 Subject: Sanity check identity server passed to bind/unbind. (#9802) Signed-off-by: Denis Kasak --- changelog.d/9802.bugfix | 1 + synapse/handlers/identity.py | 29 ++++++++++++++++++++++++++--- synapse/util/stringutils.py | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 59 insertions(+), 3 deletions(-) create mode 100644 changelog.d/9802.bugfix diff --git a/changelog.d/9802.bugfix b/changelog.d/9802.bugfix new file mode 100644 index 0000000000..0c72f7be47 --- /dev/null +++ b/changelog.d/9802.bugfix @@ -0,0 +1 @@ +Add some sanity checks to identity server passed to 3PID bind/unbind endpoints. diff --git a/synapse/handlers/identity.py b/synapse/handlers/identity.py index 87a8b89237..0b3b1fadb5 100644 --- a/synapse/handlers/identity.py +++ b/synapse/handlers/identity.py @@ -15,7 +15,6 @@ # limitations under the License. """Utilities for interacting with Identity Servers""" - import logging import urllib.parse from typing import Awaitable, Callable, Dict, List, Optional, Tuple @@ -34,7 +33,11 @@ from synapse.http.site import SynapseRequest from synapse.types import JsonDict, Requester from synapse.util import json_decoder from synapse.util.hash import sha256_and_url_safe_base64 -from synapse.util.stringutils import assert_valid_client_secret, random_string +from synapse.util.stringutils import ( + assert_valid_client_secret, + random_string, + valid_id_server_location, +) from ._base import BaseHandler @@ -172,6 +175,11 @@ class IdentityHandler(BaseHandler): server with, if necessary. 
Required if use_v2 is true use_v2: Whether to use v2 Identity Service API endpoints. Defaults to True + Raises: + SynapseError: On any of the following conditions + - the supplied id_server is not a valid identity server name + - we failed to contact the supplied identity server + Returns: The response from the identity server """ @@ -181,6 +189,12 @@ class IdentityHandler(BaseHandler): if id_access_token is None: use_v2 = False + if not valid_id_server_location(id_server): + raise SynapseError( + 400, + "id_server must be a valid hostname with optional port and path components", + ) + # Decide which API endpoint URLs to use headers = {} bind_data = {"sid": sid, "client_secret": client_secret, "mxid": mxid} @@ -269,12 +283,21 @@ class IdentityHandler(BaseHandler): id_server: Identity server to unbind from Raises: - SynapseError: If we failed to contact the identity server + SynapseError: On any of the following conditions + - the supplied id_server is not a valid identity server name + - we failed to contact the supplied identity server Returns: True on success, otherwise False if the identity server doesn't support unbinding """ + + if not valid_id_server_location(id_server): + raise SynapseError( + 400, + "id_server must be a valid hostname with optional port and path components", + ) + url = "https://%s/_matrix/identity/api/v1/3pid/unbind" % (id_server,) url_bytes = "/_matrix/identity/api/v1/3pid/unbind".encode("ascii") diff --git a/synapse/util/stringutils.py b/synapse/util/stringutils.py index c0e6fb9a60..cd82777f80 100644 --- a/synapse/util/stringutils.py +++ b/synapse/util/stringutils.py @@ -132,6 +132,38 @@ def parse_and_validate_server_name(server_name: str) -> Tuple[str, Optional[int] return host, port +def valid_id_server_location(id_server: str) -> bool: + """Check whether an identity server location, such as the one passed as the + `id_server` parameter to `/_matrix/client/r0/account/3pid/bind`, is valid. + + A valid identity server location consists of a valid hostname and optional + port number, optionally followed by any number of `/` delimited path + components, without any fragment or query string parts. + + Args: + id_server: identity server location string to validate + + Returns: + True if valid, False otherwise. + """ + + components = id_server.split("/", 1) + + host = components[0] + + try: + parse_and_validate_server_name(host) + except ValueError: + return False + + if len(components) < 2: + # no path + return True + + path = components[1] + return "#" not in path and "?" not in path + + def parse_and_validate_mxc_uri(mxc: str) -> Tuple[str, Optional[int], str]: """Parse the given string as an MXC URI -- cgit 1.5.1 From 71f0623de968f07292d5a092e9197f7513ab6cde Mon Sep 17 00:00:00 2001 From: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com> Date: Mon, 19 Apr 2021 19:16:34 +0100 Subject: Port "Allow users to click account renewal links multiple times without hitting an 'Invalid Token' page #74" from synapse-dinsic (#9832) This attempts to be a direct port of https://github.com/matrix-org/synapse-dinsic/pull/74 to mainline. There was some fiddling required to deal with the changes that have been made to mainline since (mainly dealing with the split of `RegistrationWorkerStore` from `RegistrationStore`, and the changes made to `self.make_request` in test code). 
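
One behavioural change worth flagging before the diff: `renew_account` now reports three outcomes - token valid and unused, token valid but already used (stale), or token invalid - alongside the account's expiry timestamp. A toy sketch (hypothetical helper, not code from this patch) of how a caller distinguishes them:

    # Toy illustration of the three-state result the reworked renew_account
    # returns: (token_valid, token_stale, expiration_ts). Hypothetical
    # helper, not part of this patch.

    def describe_renewal(token_valid: bool, token_stale: bool, expiration_ts: int) -> str:
        if token_valid:
            # Fresh, unused token: the account was renewed.
            return f"Account renewed until {expiration_ts}."
        if token_stale:
            # Valid token, but already used: the account is *not* renewed
            # again; report the expiry the earlier renewal produced.
            return f"Token already used; account remains valid until {expiration_ts}."
        # Unknown/invalid token: expiration_ts is 0 by convention.
        return "Invalid renewal token."


    print(describe_renewal(True, False, 1619000000000))
    print(describe_renewal(False, True, 1619000000000))
    print(describe_renewal(False, False, 0))
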
--- UPGRADE.rst | 23 +++ changelog.d/9832.feature | 1 + docs/sample_config.yaml | 148 ++++++++++-------- synapse/api/auth.py | 6 +- synapse/config/_base.pyi | 2 + synapse/config/account_validity.py | 165 +++++++++++++++++++++ synapse/config/emailconfig.py | 2 +- synapse/config/homeserver.py | 3 +- synapse/config/registration.py | 129 ---------------- synapse/handlers/account_validity.py | 101 ++++++++++--- synapse/handlers/deactivate_account.py | 4 +- synapse/push/pusherpool.py | 8 +- .../res/templates/account_previously_renewed.html | 1 + synapse/res/templates/account_renewed.html | 2 +- synapse/rest/client/v2_alpha/account_validity.py | 32 +++- synapse/storage/databases/main/registration.py | 62 ++++++-- .../59/12account_validity_token_used_ts_ms.sql | 18 +++ tests/rest/client/v2_alpha/test_register.py | 52 +++++-- 18 files changed, 496 insertions(+), 263 deletions(-) create mode 100644 changelog.d/9832.feature create mode 100644 synapse/config/account_validity.py create mode 100644 synapse/res/templates/account_previously_renewed.html create mode 100644 synapse/storage/databases/main/schema/delta/59/12account_validity_token_used_ts_ms.sql diff --git a/UPGRADE.rst b/UPGRADE.rst index 665821d4ef..eff976017d 100644 --- a/UPGRADE.rst +++ b/UPGRADE.rst @@ -85,6 +85,29 @@ for example: wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb +Upgrading to v1.33.0 +==================== + +Account Validity HTML templates can now display a user's expiration date +------------------------------------------------------------------------ + +This may affect you if you have enabled the account validity feature, and have made use of a +custom HTML template specified by the ``account_validity.template_dir`` or ``account_validity.account_renewed_html_path`` +Synapse config options. + +The template can now accept an ``expiration_ts`` variable, which represents the unix timestamp in milliseconds of the +future date until which their account has been renewed. See the +`default template `_ +for an example of usage. + +Also note that a new HTML template, ``account_previously_renewed.html``, has been added. This is shown to users +when they attempt to renew their account with a valid renewal token that has already been used before. The default +template contents can be found +`here `_, +and can also accept an ``expiration_ts`` variable. This template replaces the error message users would previously see +upon attempting to use a valid renewal token more than once. + + Upgrading to v1.32.0 ==================== diff --git a/changelog.d/9832.feature b/changelog.d/9832.feature new file mode 100644 index 0000000000..e76395fbe8 --- /dev/null +++ b/changelog.d/9832.feature @@ -0,0 +1 @@ +Don't return an error when a user attempts to renew their account multiple times with the same token. Instead, state when their account is set to expire. This change concerns the optional account validity feature. \ No newline at end of file diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml index 9182dcd987..d260d76259 100644 --- a/docs/sample_config.yaml +++ b/docs/sample_config.yaml @@ -1175,69 +1175,6 @@ url_preview_accept_language: # #enable_registration: false -# Optional account validity configuration. This allows for accounts to be denied -# any request after a given period.
-# -# Once this feature is enabled, Synapse will look for registered users without an -# expiration date at startup and will add one to every account it found using the -# current settings at that time. -# This means that, if a validity period is set, and Synapse is restarted (it will -# then derive an expiration date from the current validity period), and some time -# after that the validity period changes and Synapse is restarted, the users' -# expiration dates won't be updated unless their account is manually renewed. This -# date will be randomly selected within a range [now + period - d ; now + period], -# where d is equal to 10% of the validity period. -# -account_validity: - # The account validity feature is disabled by default. Uncomment the - # following line to enable it. - # - #enabled: true - - # The period after which an account is valid after its registration. When - # renewing the account, its validity period will be extended by this amount - # of time. This parameter is required when using the account validity - # feature. - # - #period: 6w - - # The amount of time before an account's expiry date at which Synapse will - # send an email to the account's email address with a renewal link. By - # default, no such emails are sent. - # - # If you enable this setting, you will also need to fill out the 'email' and - # 'public_baseurl' configuration sections. - # - #renew_at: 1w - - # The subject of the email sent out with the renewal link. '%(app)s' can be - # used as a placeholder for the 'app_name' parameter from the 'email' - # section. - # - # Note that the placeholder must be written '%(app)s', including the - # trailing 's'. - # - # If this is not set, a default value is used. - # - #renew_email_subject: "Renew your %(app)s account" - - # Directory in which Synapse will try to find templates for the HTML files to - # serve to the user when trying to renew an account. If not set, default - # templates from within the Synapse package will be used. - # - #template_dir: "res/templates" - - # File within 'template_dir' giving the HTML to be displayed to the user after - # they successfully renewed their account. If not set, default text is used. - # - #account_renewed_html_path: "account_renewed.html" - - # File within 'template_dir' giving the HTML to be displayed when the user - # tries to renew an account with an invalid renewal token. If not set, - # default text is used. - # - #invalid_token_html_path: "invalid_token.html" - # Time that a user's session remains valid for, after they log in. # # Note that this is not currently compatible with guest logins. @@ -1432,6 +1369,91 @@ account_threepid_delegates: #auto_join_rooms_for_guests: false +## Account Validity ## + +# Optional account validity configuration. This allows for accounts to be denied +# any request after a given period. +# +# Once this feature is enabled, Synapse will look for registered users without an +# expiration date at startup and will add one to every account it found using the +# current settings at that time. +# This means that, if a validity period is set, and Synapse is restarted (it will +# then derive an expiration date from the current validity period), and some time +# after that the validity period changes and Synapse is restarted, the users' +# expiration dates won't be updated unless their account is manually renewed. This +# date will be randomly selected within a range [now + period - d ; now + period], +# where d is equal to 10% of the validity period. 
+# +account_validity: + # The account validity feature is disabled by default. Uncomment the + # following line to enable it. + # + #enabled: true + + # The period after which an account is valid after its registration. When + # renewing the account, its validity period will be extended by this amount + # of time. This parameter is required when using the account validity + # feature. + # + #period: 6w + + # The amount of time before an account's expiry date at which Synapse will + # send an email to the account's email address with a renewal link. By + # default, no such emails are sent. + # + # If you enable this setting, you will also need to fill out the 'email' and + # 'public_baseurl' configuration sections. + # + #renew_at: 1w + + # The subject of the email sent out with the renewal link. '%(app)s' can be + # used as a placeholder for the 'app_name' parameter from the 'email' + # section. + # + # Note that the placeholder must be written '%(app)s', including the + # trailing 's'. + # + # If this is not set, a default value is used. + # + #renew_email_subject: "Renew your %(app)s account" + + # Directory in which Synapse will try to find templates for the HTML files to + # serve to the user when trying to renew an account. If not set, default + # templates from within the Synapse package will be used. + # + # The currently available templates are: + # + # * account_renewed.html: Displayed to the user after they have successfully + # renewed their account. + # + # * account_previously_renewed.html: Displayed to the user if they attempt to + # renew their account with a token that is valid, but that has already + # been used. In this case the account is not renewed again. + # + # * invalid_token.html: Displayed to the user when they try to renew an account + # with an unknown or invalid renewal token. + # + # See https://github.com/matrix-org/synapse/tree/master/synapse/res/templates for + # default template contents. + # + # The file name of some of these templates can be configured below for legacy + # reasons. + # + #template_dir: "res/templates" + + # A custom file name for the 'account_renewed.html' template. + # + # If not set, the file is assumed to be named "account_renewed.html". + # + #account_renewed_html_path: "account_renewed.html" + + # A custom file name for the 'invalid_token.html' template. + # + # If not set, the file is assumed to be named "invalid_token.html". + # + #invalid_token_html_path: "invalid_token.html" + + ## Metrics ### # Enable collection and rendering of performance metrics diff --git a/synapse/api/auth.py b/synapse/api/auth.py index 6c13f53957..872fd100cd 100644 --- a/synapse/api/auth.py +++ b/synapse/api/auth.py @@ -79,7 +79,9 @@ class Auth: self._auth_blocking = AuthBlocking(self.hs) - self._account_validity = hs.config.account_validity + self._account_validity_enabled = ( + hs.config.account_validity.account_validity_enabled + ) self._track_appservice_user_ips = hs.config.track_appservice_user_ips self._macaroon_secret_key = hs.config.macaroon_secret_key @@ -222,7 +224,7 @@ class Auth: shadow_banned = user_info.shadow_banned # Deny the request if the user account has expired. 
- if self._account_validity.enabled and not allow_expired: + if self._account_validity_enabled and not allow_expired: if await self.store.is_account_expired( user_info.user_id, self.clock.time_msec() ): diff --git a/synapse/config/_base.pyi b/synapse/config/_base.pyi index e896fd34e2..ddec356a07 100644 --- a/synapse/config/_base.pyi +++ b/synapse/config/_base.pyi @@ -1,6 +1,7 @@ from typing import Any, Iterable, List, Optional from synapse.config import ( + account_validity, api, appservice, auth, @@ -59,6 +60,7 @@ class RootConfig: captcha: captcha.CaptchaConfig voip: voip.VoipConfig registration: registration.RegistrationConfig + account_validity: account_validity.AccountValidityConfig metrics: metrics.MetricsConfig api: api.ApiConfig appservice: appservice.AppServiceConfig diff --git a/synapse/config/account_validity.py b/synapse/config/account_validity.py new file mode 100644 index 0000000000..c58a7d95a7 --- /dev/null +++ b/synapse/config/account_validity.py @@ -0,0 +1,165 @@ +# -*- coding: utf-8 -*- +# Copyright 2020 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from synapse.config._base import Config, ConfigError + + +class AccountValidityConfig(Config): + section = "account_validity" + + def read_config(self, config, **kwargs): + account_validity_config = config.get("account_validity") or {} + self.account_validity_enabled = account_validity_config.get("enabled", False) + self.account_validity_renew_by_email_enabled = ( + "renew_at" in account_validity_config + ) + + if self.account_validity_enabled: + if "period" in account_validity_config: + self.account_validity_period = self.parse_duration( + account_validity_config["period"] + ) + else: + raise ConfigError("'period' is required when using account validity") + + if "renew_at" in account_validity_config: + self.account_validity_renew_at = self.parse_duration( + account_validity_config["renew_at"] + ) + + if "renew_email_subject" in account_validity_config: + self.account_validity_renew_email_subject = account_validity_config[ + "renew_email_subject" + ] + else: + self.account_validity_renew_email_subject = "Renew your %(app)s account" + + self.account_validity_startup_job_max_delta = ( + self.account_validity_period * 10.0 / 100.0 + ) + + if self.account_validity_renew_by_email_enabled: + if not self.public_baseurl: + raise ConfigError("Can't send renewal emails without 'public_baseurl'") + + # Load account validity templates. 
+ account_validity_template_dir = account_validity_config.get("template_dir") + + account_renewed_template_filename = account_validity_config.get( + "account_renewed_html_path", "account_renewed.html" + ) + invalid_token_template_filename = account_validity_config.get( + "invalid_token_html_path", "invalid_token.html" + ) + + # Read and store template content + ( + self.account_validity_account_renewed_template, + self.account_validity_account_previously_renewed_template, + self.account_validity_invalid_token_template, + ) = self.read_templates( + [ + account_renewed_template_filename, + "account_previously_renewed.html", + invalid_token_template_filename, + ], + account_validity_template_dir, + ) + + def generate_config_section(self, **kwargs): + return """\ + ## Account Validity ## + + # Optional account validity configuration. This allows for accounts to be denied + # any request after a given period. + # + # Once this feature is enabled, Synapse will look for registered users without an + # expiration date at startup and will add one to every account it found using the + # current settings at that time. + # This means that, if a validity period is set, and Synapse is restarted (it will + # then derive an expiration date from the current validity period), and some time + # after that the validity period changes and Synapse is restarted, the users' + # expiration dates won't be updated unless their account is manually renewed. This + # date will be randomly selected within a range [now + period - d ; now + period], + # where d is equal to 10% of the validity period. + # + account_validity: + # The account validity feature is disabled by default. Uncomment the + # following line to enable it. + # + #enabled: true + + # The period after which an account is valid after its registration. When + # renewing the account, its validity period will be extended by this amount + # of time. This parameter is required when using the account validity + # feature. + # + #period: 6w + + # The amount of time before an account's expiry date at which Synapse will + # send an email to the account's email address with a renewal link. By + # default, no such emails are sent. + # + # If you enable this setting, you will also need to fill out the 'email' and + # 'public_baseurl' configuration sections. + # + #renew_at: 1w + + # The subject of the email sent out with the renewal link. '%(app)s' can be + # used as a placeholder for the 'app_name' parameter from the 'email' + # section. + # + # Note that the placeholder must be written '%(app)s', including the + # trailing 's'. + # + # If this is not set, a default value is used. + # + #renew_email_subject: "Renew your %(app)s account" + + # Directory in which Synapse will try to find templates for the HTML files to + # serve to the user when trying to renew an account. If not set, default + # templates from within the Synapse package will be used. + # + # The currently available templates are: + # + # * account_renewed.html: Displayed to the user after they have successfully + # renewed their account. + # + # * account_previously_renewed.html: Displayed to the user if they attempt to + # renew their account with a token that is valid, but that has already + # been used. In this case the account is not renewed again. + # + # * invalid_token.html: Displayed to the user when they try to renew an account + # with an unknown or invalid renewal token. + # + # See https://github.com/matrix-org/synapse/tree/master/synapse/res/templates for + # default template contents. 
+ # + # The file name of some of these templates can be configured below for legacy + # reasons. + # + #template_dir: "res/templates" + + # A custom file name for the 'account_renewed.html' template. + # + # If not set, the file is assumed to be named "account_renewed.html". + # + #account_renewed_html_path: "account_renewed.html" + + # A custom file name for the 'invalid_token.html' template. + # + # If not set, the file is assumed to be named "invalid_token.html". + # + #invalid_token_html_path: "invalid_token.html" + """ diff --git a/synapse/config/emailconfig.py b/synapse/config/emailconfig.py index c587939c7a..5564d7d097 100644 --- a/synapse/config/emailconfig.py +++ b/synapse/config/emailconfig.py @@ -299,7 +299,7 @@ class EmailConfig(Config): "client_base_url", email_config.get("riot_base_url", None) ) - if self.account_validity.renew_by_email_enabled: + if self.account_validity_renew_by_email_enabled: expiry_template_html = email_config.get( "expiry_template_html", "notice_expiry.html" ) diff --git a/synapse/config/homeserver.py b/synapse/config/homeserver.py index 1309535068..58e3bcd511 100644 --- a/synapse/config/homeserver.py +++ b/synapse/config/homeserver.py @@ -12,8 +12,8 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - from ._base import RootConfig +from .account_validity import AccountValidityConfig from .api import ApiConfig from .appservice import AppServiceConfig from .auth import AuthConfig @@ -68,6 +68,7 @@ class HomeServerConfig(RootConfig): CaptchaConfig, VoipConfig, RegistrationConfig, + AccountValidityConfig, MetricsConfig, ApiConfig, AppServiceConfig, diff --git a/synapse/config/registration.py b/synapse/config/registration.py index f8a2768af8..e6f52b4f40 100644 --- a/synapse/config/registration.py +++ b/synapse/config/registration.py @@ -12,74 +12,12 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-import os - -import pkg_resources - from synapse.api.constants import RoomCreationPreset from synapse.config._base import Config, ConfigError from synapse.types import RoomAlias, UserID from synapse.util.stringutils import random_string_with_symbols, strtobool -class AccountValidityConfig(Config): - section = "accountvalidity" - - def __init__(self, config, synapse_config): - if config is None: - return - super().__init__() - self.enabled = config.get("enabled", False) - self.renew_by_email_enabled = "renew_at" in config - - if self.enabled: - if "period" in config: - self.period = self.parse_duration(config["period"]) - else: - raise ConfigError("'period' is required when using account validity") - - if "renew_at" in config: - self.renew_at = self.parse_duration(config["renew_at"]) - - if "renew_email_subject" in config: - self.renew_email_subject = config["renew_email_subject"] - else: - self.renew_email_subject = "Renew your %(app)s account" - - self.startup_job_max_delta = self.period * 10.0 / 100.0 - - if self.renew_by_email_enabled: - if "public_baseurl" not in synapse_config: - raise ConfigError("Can't send renewal emails without 'public_baseurl'") - - template_dir = config.get("template_dir") - - if not template_dir: - template_dir = pkg_resources.resource_filename("synapse", "res/templates") - - if "account_renewed_html_path" in config: - file_path = os.path.join(template_dir, config["account_renewed_html_path"]) - - self.account_renewed_html_content = self.read_file( - file_path, "account_validity.account_renewed_html_path" - ) - else: - self.account_renewed_html_content = ( - "Your account has been successfully renewed." - ) - - if "invalid_token_html_path" in config: - file_path = os.path.join(template_dir, config["invalid_token_html_path"]) - - self.invalid_token_html_content = self.read_file( - file_path, "account_validity.invalid_token_html_path" - ) - else: - self.invalid_token_html_content = ( - "Invalid renewal token." - ) - - class RegistrationConfig(Config): section = "registration" @@ -92,10 +30,6 @@ class RegistrationConfig(Config): str(config["disable_registration"]) ) - self.account_validity = AccountValidityConfig( - config.get("account_validity") or {}, config - ) - self.registrations_require_3pid = config.get("registrations_require_3pid", []) self.allowed_local_3pids = config.get("allowed_local_3pids", []) self.enable_3pid_lookup = config.get("enable_3pid_lookup", True) @@ -207,69 +141,6 @@ class RegistrationConfig(Config): # #enable_registration: false - # Optional account validity configuration. This allows for accounts to be denied - # any request after a given period. - # - # Once this feature is enabled, Synapse will look for registered users without an - # expiration date at startup and will add one to every account it found using the - # current settings at that time. - # This means that, if a validity period is set, and Synapse is restarted (it will - # then derive an expiration date from the current validity period), and some time - # after that the validity period changes and Synapse is restarted, the users' - # expiration dates won't be updated unless their account is manually renewed. This - # date will be randomly selected within a range [now + period - d ; now + period], - # where d is equal to 10%% of the validity period. - # - account_validity: - # The account validity feature is disabled by default. Uncomment the - # following line to enable it. - # - #enabled: true - - # The period after which an account is valid after its registration. 
When - # renewing the account, its validity period will be extended by this amount - # of time. This parameter is required when using the account validity - # feature. - # - #period: 6w - - # The amount of time before an account's expiry date at which Synapse will - # send an email to the account's email address with a renewal link. By - # default, no such emails are sent. - # - # If you enable this setting, you will also need to fill out the 'email' and - # 'public_baseurl' configuration sections. - # - #renew_at: 1w - - # The subject of the email sent out with the renewal link. '%%(app)s' can be - # used as a placeholder for the 'app_name' parameter from the 'email' - # section. - # - # Note that the placeholder must be written '%%(app)s', including the - # trailing 's'. - # - # If this is not set, a default value is used. - # - #renew_email_subject: "Renew your %%(app)s account" - - # Directory in which Synapse will try to find templates for the HTML files to - # serve to the user when trying to renew an account. If not set, default - # templates from within the Synapse package will be used. - # - #template_dir: "res/templates" - - # File within 'template_dir' giving the HTML to be displayed to the user after - # they successfully renewed their account. If not set, default text is used. - # - #account_renewed_html_path: "account_renewed.html" - - # File within 'template_dir' giving the HTML to be displayed when the user - # tries to renew an account with an invalid renewal token. If not set, - # default text is used. - # - #invalid_token_html_path: "invalid_token.html" - # Time that a user's session remains valid for, after they log in. # # Note that this is not currently compatible with guest logins. diff --git a/synapse/handlers/account_validity.py b/synapse/handlers/account_validity.py index 66ce7e8b83..5b927f10b3 100644 --- a/synapse/handlers/account_validity.py +++ b/synapse/handlers/account_validity.py @@ -17,7 +17,7 @@ import email.utils import logging from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText -from typing import TYPE_CHECKING, List, Optional +from typing import TYPE_CHECKING, List, Optional, Tuple from synapse.api.errors import StoreError, SynapseError from synapse.logging.context import make_deferred_yieldable @@ -39,28 +39,44 @@ class AccountValidityHandler: self.sendmail = self.hs.get_sendmail() self.clock = self.hs.get_clock() - self._account_validity = self.hs.config.account_validity + self._account_validity_enabled = ( + hs.config.account_validity.account_validity_enabled + ) + self._account_validity_renew_by_email_enabled = ( + hs.config.account_validity.account_validity_renew_by_email_enabled + ) + + self._account_validity_period = None + if self._account_validity_enabled: + self._account_validity_period = ( + hs.config.account_validity.account_validity_period + ) if ( - self._account_validity.enabled - and self._account_validity.renew_by_email_enabled + self._account_validity_enabled + and self._account_validity_renew_by_email_enabled ): # Don't do email-specific configuration if renewal by email is disabled. 
- self._template_html = self.config.account_validity_template_html - self._template_text = self.config.account_validity_template_text + self._template_html = ( + hs.config.account_validity.account_validity_template_html + ) + self._template_text = ( + hs.config.account_validity.account_validity_template_text + ) + account_validity_renew_email_subject = ( + hs.config.account_validity.account_validity_renew_email_subject + ) try: - app_name = self.hs.config.email_app_name + app_name = hs.config.email_app_name - self._subject = self._account_validity.renew_email_subject % { - "app": app_name - } + self._subject = account_validity_renew_email_subject % {"app": app_name} - self._from_string = self.hs.config.email_notif_from % {"app": app_name} + self._from_string = hs.config.email_notif_from % {"app": app_name} except Exception: # If substitution failed, fall back to the bare strings. - self._subject = self._account_validity.renew_email_subject - self._from_string = self.hs.config.email_notif_from + self._subject = account_validity_renew_email_subject + self._from_string = hs.config.email_notif_from self._raw_from = email.utils.parseaddr(self._from_string)[1] @@ -220,50 +236,87 @@ class AccountValidityHandler: attempts += 1 raise StoreError(500, "Couldn't generate a unique string as refresh string.") - async def renew_account(self, renewal_token: str) -> bool: + async def renew_account(self, renewal_token: str) -> Tuple[bool, bool, int]: """Renews the account attached to a given renewal token by pushing back the expiration date by the current validity period in the server's configuration. + If it turns out that the token is valid but has already been used, then the + token is considered stale. A token is stale if the 'token_used_ts_ms' db column + is non-null. + Args: renewal_token: Token sent with the renewal request. Returns: - Whether the provided token is valid. + A tuple containing: + * A bool representing whether the token is valid and unused. + * A bool which is `True` if the token is valid, but stale. + * An int representing the user's expiry timestamp as milliseconds since the + epoch, or 0 if the token was invalid. """ try: - user_id = await self.store.get_user_from_renewal_token(renewal_token) + ( + user_id, + current_expiration_ts, + token_used_ts, + ) = await self.store.get_user_from_renewal_token(renewal_token) except StoreError: - return False + return False, False, 0 + + # Check whether this token has already been used. + if token_used_ts: + logger.info( + "User '%s' attempted to use previously used token '%s' to renew account", + user_id, + renewal_token, + ) + return False, True, current_expiration_ts logger.debug("Renewing an account for user %s", user_id) - await self.renew_account_for_user(user_id) - return True + # Renew the account. Pass the renewal_token here so that it is not cleared. + # We want to keep the token around in case the user attempts to renew their + # account with the same token twice (clicking the email link twice). + # + # In that case, the token will be accepted, but the account's expiration ts + # will remain unchanged. 
+ new_expiration_ts = await self.renew_account_for_user( + user_id, renewal_token=renewal_token + ) + + return True, False, new_expiration_ts async def renew_account_for_user( self, user_id: str, expiration_ts: Optional[int] = None, email_sent: bool = False, + renewal_token: Optional[str] = None, ) -> int: """Renews the account attached to a given user by pushing back the expiration date by the current validity period in the server's configuration. Args: - renewal_token: Token sent with the renewal request. + user_id: The ID of the user to renew. expiration_ts: New expiration date. Defaults to now + validity period. - email_sen: Whether an email has been sent for this validity period. - Defaults to False. + email_sent: Whether an email has been sent for this validity period. + renewal_token: Token sent with the renewal request. The user's token + will be cleared if this is None. Returns: New expiration date for this account, as a timestamp in milliseconds since epoch. """ + now = self.clock.time_msec() if expiration_ts is None: - expiration_ts = self.clock.time_msec() + self._account_validity.period + expiration_ts = now + self._account_validity_period await self.store.set_account_validity_for_user( - user_id=user_id, expiration_ts=expiration_ts, email_sent=email_sent + user_id=user_id, + expiration_ts=expiration_ts, + email_sent=email_sent, + renewal_token=renewal_token, + token_used_ts=now, ) return expiration_ts diff --git a/synapse/handlers/deactivate_account.py b/synapse/handlers/deactivate_account.py index 3f6f9f7f3d..45d2404dde 100644 --- a/synapse/handlers/deactivate_account.py +++ b/synapse/handlers/deactivate_account.py @@ -49,7 +49,9 @@ class DeactivateAccountHandler(BaseHandler): if hs.config.run_background_tasks: hs.get_reactor().callWhenRunning(self._start_user_parting) - self._account_validity_enabled = hs.config.account_validity.enabled + self._account_validity_enabled = ( + hs.config.account_validity.account_validity_enabled + ) async def deactivate_account( self, diff --git a/synapse/push/pusherpool.py b/synapse/push/pusherpool.py index 564a5ed0df..579fcdf472 100644 --- a/synapse/push/pusherpool.py +++ b/synapse/push/pusherpool.py @@ -62,7 +62,9 @@ class PusherPool: self.store = self.hs.get_datastore() self.clock = self.hs.get_clock() - self._account_validity = hs.config.account_validity + self._account_validity_enabled = ( + hs.config.account_validity.account_validity_enabled + ) # We shard the handling of push notifications by user ID. self._pusher_shard_config = hs.config.push.pusher_shard_config @@ -236,7 +238,7 @@ class PusherPool: for u in users_affected: # Don't push if the user account has expired - if self._account_validity.enabled: + if self._account_validity_enabled: expired = await self.store.is_account_expired( u, self.clock.time_msec() ) @@ -266,7 +268,7 @@ class PusherPool: for u in users_affected: # Don't push if the user account has expired - if self._account_validity.enabled: + if self._account_validity_enabled: expired = await self.store.is_account_expired( u, self.clock.time_msec() ) diff --git a/synapse/res/templates/account_previously_renewed.html b/synapse/res/templates/account_previously_renewed.html new file mode 100644 index 0000000000..b751359bdf --- /dev/null +++ b/synapse/res/templates/account_previously_renewed.html @@ -0,0 +1 @@ +Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}. 
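The renewal flow above replaces the old boolean result with a three-state contract: a token can be valid and unused, valid but already used ("stale", i.e. `token_used_ts_ms` is non-null), or unknown. A minimal standalone sketch of that contract, using hypothetical `TokenRecord`/`renew` names; this is an illustration, not Synapse's actual storage code:

    from typing import Optional, Tuple

    class TokenRecord:
        """Hypothetical stand-in for a row in the account_validity table."""

        def __init__(self, user_id: str, expiration_ts_ms: int) -> None:
            self.user_id = user_id
            self.expiration_ts_ms = expiration_ts_ms
            self.token_used_ts_ms: Optional[int] = None  # set on first use

    def renew(
        record: Optional[TokenRecord], now_ms: int, period_ms: int
    ) -> Tuple[bool, bool, int]:
        """Returns (valid_and_unused, stale, expiration_ts_ms), mirroring renew_account."""
        if record is None:
            # Unknown token: invalid, and the expiry is reported as 0.
            return False, False, 0
        if record.token_used_ts_ms is not None:
            # Token was already used: stale, and the expiry stays unchanged.
            return False, True, record.expiration_ts_ms
        # First use: extend the account and remember when the token was used.
        record.expiration_ts_ms = now_ms + period_ms
        record.token_used_ts_ms = now_ms
        return True, False, record.expiration_ts_ms

    rec = TokenRecord("@alice:example.com", expiration_ts_ms=1_000)
    assert renew(rec, now_ms=2_000, period_ms=500) == (True, False, 2_500)
    # Clicking the email link a second time is accepted but changes nothing:
    assert renew(rec, now_ms=3_000, period_ms=500) == (False, True, 2_500)

This is also why the servlet changes in the following hunks return HTTP 200 for both fresh and stale tokens, but 404 only for unknown ones.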
diff --git a/synapse/res/templates/account_renewed.html b/synapse/res/templates/account_renewed.html index 894da030af..e8c0f52f05 100644 --- a/synapse/res/templates/account_renewed.html +++ b/synapse/res/templates/account_renewed.html @@ -1 +1 @@ -Your account has been successfully renewed. +Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}. diff --git a/synapse/rest/client/v2_alpha/account_validity.py b/synapse/rest/client/v2_alpha/account_validity.py index 0ad07fb895..2d1ad3d3fb 100644 --- a/synapse/rest/client/v2_alpha/account_validity.py +++ b/synapse/rest/client/v2_alpha/account_validity.py @@ -36,24 +36,40 @@ class AccountValidityRenewServlet(RestServlet): self.hs = hs self.account_activity_handler = hs.get_account_validity_handler() self.auth = hs.get_auth() - self.success_html = hs.config.account_validity.account_renewed_html_content - self.failure_html = hs.config.account_validity.invalid_token_html_content + self.account_renewed_template = ( + hs.config.account_validity.account_validity_account_renewed_template + ) + self.account_previously_renewed_template = ( + hs.config.account_validity.account_validity_account_previously_renewed_template + ) + self.invalid_token_template = ( + hs.config.account_validity.account_validity_invalid_token_template + ) async def on_GET(self, request): if b"token" not in request.args: raise SynapseError(400, "Missing renewal token") renewal_token = request.args[b"token"][0] - token_valid = await self.account_activity_handler.renew_account( + ( + token_valid, + token_stale, + expiration_ts, + ) = await self.account_activity_handler.renew_account( renewal_token.decode("utf8") ) if token_valid: status_code = 200 - response = self.success_html + response = self.account_renewed_template.render(expiration_ts=expiration_ts) + elif token_stale: + status_code = 200 + response = self.account_previously_renewed_template.render( + expiration_ts=expiration_ts + ) else: status_code = 404 - response = self.failure_html + response = self.invalid_token_template.render(expiration_ts=expiration_ts) respond_with_html(request, status_code, response) @@ -71,10 +87,12 @@ class AccountValiditySendMailServlet(RestServlet): self.hs = hs self.account_activity_handler = hs.get_account_validity_handler() self.auth = hs.get_auth() - self.account_validity = self.hs.config.account_validity + self.account_validity_renew_by_email_enabled = ( + hs.config.account_validity.account_validity_renew_by_email_enabled + ) async def on_POST(self, request): - if not self.account_validity.renew_by_email_enabled: + if not self.account_validity_renew_by_email_enabled: raise AuthError( 403, "Account renewal via email is disabled on this server." 
) diff --git a/synapse/storage/databases/main/registration.py b/synapse/storage/databases/main/registration.py index 833214b7e0..6e5ee557d2 100644 --- a/synapse/storage/databases/main/registration.py +++ b/synapse/storage/databases/main/registration.py @@ -91,13 +91,25 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): id_column=None, ) - self._account_validity = hs.config.account_validity - if hs.config.run_background_tasks and self._account_validity.enabled: - self._clock.call_later( - 0.0, - self._set_expiration_date_when_missing, + self._account_validity_enabled = ( + hs.config.account_validity.account_validity_enabled + ) + self._account_validity_period = None + self._account_validity_startup_job_max_delta = None + if self._account_validity_enabled: + self._account_validity_period = ( + hs.config.account_validity.account_validity_period + ) + self._account_validity_startup_job_max_delta = ( + hs.config.account_validity.account_validity_startup_job_max_delta ) + if hs.config.run_background_tasks: + self._clock.call_later( + 0.0, + self._set_expiration_date_when_missing, + ) + # Create a background job for culling expired 3PID validity tokens if hs.config.run_background_tasks: self._clock.looping_call( @@ -194,6 +206,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): expiration_ts: int, email_sent: bool, renewal_token: Optional[str] = None, + token_used_ts: Optional[int] = None, ) -> None: """Updates the account validity properties of the given account, with the given values. @@ -207,6 +220,8 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): period. renewal_token: Renewal token the user can use to extend the validity of their account. Defaults to no token. + token_used_ts: A timestamp of when the current token was used to renew + the account. """ def set_account_validity_for_user_txn(txn): @@ -218,6 +233,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): "expiration_ts_ms": expiration_ts, "email_sent": email_sent, "renewal_token": renewal_token, + "token_used_ts_ms": token_used_ts, }, ) self._invalidate_cache_and_stream( @@ -231,7 +247,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): async def set_renewal_token_for_user( self, user_id: str, renewal_token: str ) -> None: - """Defines a renewal token for a given user. + """Defines a renewal token for a given user, and clears the token_used timestamp. Args: user_id: ID of the user to set the renewal token for. @@ -244,26 +260,40 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): await self.db_pool.simple_update_one( table="account_validity", keyvalues={"user_id": user_id}, - updatevalues={"renewal_token": renewal_token}, + updatevalues={"renewal_token": renewal_token, "token_used_ts_ms": None}, desc="set_renewal_token_for_user", ) - async def get_user_from_renewal_token(self, renewal_token: str) -> str: - """Get a user ID from a renewal token. + async def get_user_from_renewal_token( + self, renewal_token: str + ) -> Tuple[str, int, Optional[int]]: + """Get a user ID and renewal status from a renewal token. Args: renewal_token: The renewal token to perform the lookup with. Returns: - The ID of the user to which the token belongs. + A tuple containing the following values: + * The ID of a user to which the token belongs. + * An int representing the user's expiry timestamp as milliseconds since the + epoch, or 0 if the token was invalid.
+ * An optional int representing the timestamp of when the user renewed their + account, as milliseconds since the epoch. None if the account + has not been renewed using the current token yet. """ - return await self.db_pool.simple_select_one_onecol( + ret_dict = await self.db_pool.simple_select_one( table="account_validity", keyvalues={"renewal_token": renewal_token}, - retcol="user_id", + retcols=["user_id", "expiration_ts_ms", "token_used_ts_ms"], desc="get_user_from_renewal_token", ) + return ( + ret_dict["user_id"], + ret_dict["expiration_ts_ms"], + ret_dict["token_used_ts_ms"], + ) + async def get_renewal_token_for_user(self, user_id: str) -> str: """Get the renewal token associated with a given user ID. @@ -302,7 +332,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): "get_users_expiring_soon", select_users_txn, self._clock.time_msec(), - self.config.account_validity.renew_at, + self.config.account_validity_renew_at, ) async def set_renewal_mail_status(self, user_id: str, email_sent: bool) -> None: @@ -964,11 +994,11 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore): delta equal to 10% of the validity period. """ now_ms = self._clock.time_msec() - expiration_ts = now_ms + self._account_validity.period + expiration_ts = now_ms + self._account_validity_period if use_delta: expiration_ts = self.rand.randrange( - expiration_ts - self._account_validity.startup_job_max_delta, + expiration_ts - self._account_validity_startup_job_max_delta, expiration_ts, ) @@ -1412,7 +1442,7 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore): except self.database_engine.module.IntegrityError: raise StoreError(400, "User ID already taken.", errcode=Codes.USER_IN_USE) - if self._account_validity.enabled: + if self._account_validity_enabled: self.set_expiration_date_for_user_txn(txn, user_id) if create_profile_with_displayname: diff --git a/synapse/storage/databases/main/schema/delta/59/12account_validity_token_used_ts_ms.sql b/synapse/storage/databases/main/schema/delta/59/12account_validity_token_used_ts_ms.sql new file mode 100644 index 0000000000..4836dac16e --- /dev/null +++ b/synapse/storage/databases/main/schema/delta/59/12account_validity_token_used_ts_ms.sql @@ -0,0 +1,18 @@ +/* Copyright 2020 The Matrix.org Foundation C.I.C. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +-- Track when users renew their account using the value of the 'renewal_token' column. +-- This field should be set to NULL after a fresh token is generated. +ALTER TABLE account_validity ADD token_used_ts_ms BIGINT; diff --git a/tests/rest/client/v2_alpha/test_register.py b/tests/rest/client/v2_alpha/test_register.py index 054d4e4140..98695b05d5 100644 --- a/tests/rest/client/v2_alpha/test_register.py +++ b/tests/rest/client/v2_alpha/test_register.py @@ -492,8 +492,8 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase): (user_id, tok) = self.create_user() - # Move 6 days forward. This should trigger a renewal email to be sent.
- self.reactor.advance(datetime.timedelta(days=6).total_seconds()) + # Move 5 days forward. This should trigger a renewal email to be sent. + self.reactor.advance(datetime.timedelta(days=5).total_seconds()) self.assertEqual(len(self.email_attempts), 1) # Retrieving the URL from the email is too much pain for now, so we @@ -504,14 +504,32 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase): self.assertEquals(channel.result["code"], b"200", channel.result) # Check that we're getting HTML back. - content_type = None - for header in channel.result.get("headers", []): - if header[0] == b"Content-Type": - content_type = header[1] - self.assertEqual(content_type, b"text/html; charset=utf-8", channel.result) + content_type = channel.headers.getRawHeaders(b"Content-Type") + self.assertEqual(content_type, [b"text/html; charset=utf-8"], channel.result) # Check that the HTML we're getting is the one we expect on a successful renewal. - expected_html = self.hs.config.account_validity.account_renewed_html_content + expiration_ts = self.get_success(self.store.get_expiration_ts_for_user(user_id)) + expected_html = self.hs.config.account_validity.account_validity_account_renewed_template.render( + expiration_ts=expiration_ts + ) + self.assertEqual( + channel.result["body"], expected_html.encode("utf8"), channel.result + ) + + # Move 1 day forward. Try to renew with the same token again. + url = "/_matrix/client/unstable/account_validity/renew?token=%s" % renewal_token + channel = self.make_request(b"GET", url) + self.assertEquals(channel.result["code"], b"200", channel.result) + + # Check that we're getting HTML back. + content_type = channel.headers.getRawHeaders(b"Content-Type") + self.assertEqual(content_type, [b"text/html; charset=utf-8"], channel.result) + + # Check that the HTML we're getting is the one we expect when reusing a + # token. The account expiration date should not have changed. + expected_html = self.hs.config.account_validity.account_validity_account_previously_renewed_template.render( + expiration_ts=expiration_ts + ) self.assertEqual( channel.result["body"], expected_html.encode("utf8"), channel.result ) @@ -531,15 +549,14 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase): self.assertEquals(channel.result["code"], b"404", channel.result) # Check that we're getting HTML back. - content_type = None - for header in channel.result.get("headers", []): - if header[0] == b"Content-Type": - content_type = header[1] - self.assertEqual(content_type, b"text/html; charset=utf-8", channel.result) + content_type = channel.headers.getRawHeaders(b"Content-Type") + self.assertEqual(content_type, [b"text/html; charset=utf-8"], channel.result) # Check that the HTML we're getting is the one we expect when using an # invalid/unknown token. - expected_html = self.hs.config.account_validity.invalid_token_html_content + expected_html = ( + self.hs.config.account_validity.account_validity_invalid_token_template.render() + ) self.assertEqual( channel.result["body"], expected_html.encode("utf8"), channel.result ) @@ -647,7 +664,12 @@ class AccountValidityBackgroundJobTestCase(unittest.HomeserverTestCase): config["account_validity"] = {"enabled": False} self.hs = self.setup_test_homeserver(config=config) - self.hs.config.account_validity.period = self.validity_period + + # We need to set these directly, instead of in the homeserver config dict above. 
+ # This is due to account validity-related config options not being read by + # Synapse when account_validity.enabled is False. + self.hs.get_datastore()._account_validity_period = self.validity_period + self.hs.get_datastore()._account_validity_startup_job_max_delta = self.max_delta self.store = self.hs.get_datastore() -- cgit 1.5.1 From 495b214f4f8f45d16ffee851c8ab7a380dd0e2b2 Mon Sep 17 00:00:00 2001 From: Jonathan de Jong Date: Tue, 20 Apr 2021 12:50:49 +0200 Subject: Fix (final) Bugbear violations (#9838) --- changelog.d/9838.misc | 1 + scripts-dev/definitions.py | 2 +- scripts-dev/list_url_patterns.py | 2 +- setup.cfg | 3 +-- synapse/event_auth.py | 2 +- synapse/federation/send_queue.py | 4 ++-- synapse/handlers/auth.py | 2 +- synapse/handlers/device.py | 13 +++++-------- synapse/handlers/federation.py | 2 +- synapse/logging/_remote.py | 4 ++-- synapse/rest/key/v2/remote_key_resource.py | 4 ++-- synapse/storage/databases/main/events.py | 10 +++++----- tests/handlers/test_federation.py | 2 +- tests/replication/tcp/streams/test_events.py | 4 ++-- tests/rest/admin/test_device.py | 4 ++-- tests/rest/admin/test_event_reports.py | 8 ++++---- tests/rest/admin/test_room.py | 8 ++++---- tests/rest/admin/test_statistics.py | 2 +- tests/rest/admin/test_user.py | 4 ++-- tests/rest/client/v1/test_rooms.py | 6 +++--- tests/storage/test_event_metrics.py | 4 ++-- tests/unittest.py | 2 +- tests/utils.py | 2 +- 23 files changed, 46 insertions(+), 49 deletions(-) create mode 100644 changelog.d/9838.misc diff --git a/changelog.d/9838.misc b/changelog.d/9838.misc new file mode 100644 index 0000000000..b98ce56309 --- /dev/null +++ b/changelog.d/9838.misc @@ -0,0 +1 @@ +Introduce flake8-bugbear to the test suite and fix some of its lint violations. \ No newline at end of file diff --git a/scripts-dev/definitions.py b/scripts-dev/definitions.py index 313860df13..c82ddd9677 100755 --- a/scripts-dev/definitions.py +++ b/scripts-dev/definitions.py @@ -140,7 +140,7 @@ if __name__ == "__main__": definitions = {} for directory in args.directories: - for root, dirs, files in os.walk(directory): + for root, _, files in os.walk(directory): for filename in files: if filename.endswith(".py"): filepath = os.path.join(root, filename) diff --git a/scripts-dev/list_url_patterns.py b/scripts-dev/list_url_patterns.py index 26ad7c67f4..e85420dea8 100755 --- a/scripts-dev/list_url_patterns.py +++ b/scripts-dev/list_url_patterns.py @@ -48,7 +48,7 @@ args = parser.parse_args() for directory in args.directories: - for root, dirs, files in os.walk(directory): + for root, _, files in os.walk(directory): for filename in files: if filename.endswith(".py"): filepath = os.path.join(root, filename) diff --git a/setup.cfg b/setup.cfg index 33601b71d5..e5ceb7ed19 100644 --- a/setup.cfg +++ b/setup.cfg @@ -18,8 +18,7 @@ ignore = # E203: whitespace before ':' (which is contrary to pep8?) 
# E731: do not assign a lambda expression, use a def # E501: Line too long (black enforces this for us) -# B007: Subsection of the bugbear suite (TODO: add in remaining fixes) -ignore=W503,W504,E203,E731,E501,B007 +ignore=W503,W504,E203,E731,E501 [isort] line_length = 88 diff --git a/synapse/event_auth.py b/synapse/event_auth.py index 5234e3f81e..c831d9f73c 100644 --- a/synapse/event_auth.py +++ b/synapse/event_auth.py @@ -670,7 +670,7 @@ def _verify_third_party_invite(event: EventBase, auth_events: StateMap[EventBase public_key = public_key_object["public_key"] try: for server, signature_block in signed["signatures"].items(): - for key_name, encoded_signature in signature_block.items(): + for key_name in signature_block.keys(): if not key_name.startswith("ed25519:"): continue verify_key = decode_verify_key_bytes( diff --git a/synapse/federation/send_queue.py b/synapse/federation/send_queue.py index d71f04e43e..65d76ea974 100644 --- a/synapse/federation/send_queue.py +++ b/synapse/federation/send_queue.py @@ -501,10 +501,10 @@ def process_rows_for_federation( states=[state], destinations=destinations ) - for destination, edu_map in buff.keyed_edus.items(): + for edu_map in buff.keyed_edus.values(): for key, edu in edu_map.items(): transaction_queue.send_edu(edu, key) - for destination, edu_list in buff.edus.items(): + for edu_list in buff.edus.values(): for edu in edu_list: transaction_queue.send_edu(edu, None) diff --git a/synapse/handlers/auth.py b/synapse/handlers/auth.py index b8a37b6477..36f2450e2e 100644 --- a/synapse/handlers/auth.py +++ b/synapse/handlers/auth.py @@ -1248,7 +1248,7 @@ class AuthHandler(BaseHandler): # see if any of our auth providers want to know about this for provider in self.password_providers: - for token, token_id, device_id in tokens_and_devices: + for token, _, device_id in tokens_and_devices: await provider.on_logged_out( user_id=user_id, device_id=device_id, access_token=token ) diff --git a/synapse/handlers/device.py b/synapse/handlers/device.py index d75edb184b..c1d7800981 100644 --- a/synapse/handlers/device.py +++ b/synapse/handlers/device.py @@ -156,8 +156,7 @@ class DeviceWorkerHandler(BaseHandler): # The user may have left the room # TODO: Check if they actually did or if we were just invited. 
if room_id not in room_ids: - for key, event_id in current_state_ids.items(): - etype, state_key = key + for etype, state_key in current_state_ids.keys(): if etype != EventTypes.Member: continue possibly_left.add(state_key) @@ -179,8 +178,7 @@ class DeviceWorkerHandler(BaseHandler): log_kv( {"event": "encountered empty previous state", "room_id": room_id} ) - for key, event_id in current_state_ids.items(): - etype, state_key = key + for etype, state_key in current_state_ids.keys(): if etype != EventTypes.Member: continue possibly_changed.add(state_key) @@ -198,8 +196,7 @@ class DeviceWorkerHandler(BaseHandler): for state_dict in prev_state_ids.values(): member_event = state_dict.get((EventTypes.Member, user_id), None) if not member_event or member_event != current_member_id: - for key, event_id in current_state_ids.items(): - etype, state_key = key + for etype, state_key in current_state_ids.keys(): if etype != EventTypes.Member: continue possibly_changed.add(state_key) @@ -714,7 +711,7 @@ class DeviceListUpdater: # This can happen since we batch updates return - for device_id, stream_id, prev_ids, content in pending_updates: + for device_id, stream_id, prev_ids, _ in pending_updates: logger.debug( "Handling update %r/%r, ID: %r, prev: %r ", user_id, @@ -740,7 +737,7 @@ class DeviceListUpdater: else: # Simply update the single device, since we know that is the only # change (because of the single prev_id matching the current cache) - for device_id, stream_id, prev_ids, content in pending_updates: + for device_id, stream_id, _, content in pending_updates: await self.store.update_remote_device_list_cache_entry( user_id, device_id, content, stream_id ) diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py index 4b3730aa3b..dbdd7d2db3 100644 --- a/synapse/handlers/federation.py +++ b/synapse/handlers/federation.py @@ -2956,7 +2956,7 @@ class FederationHandler(BaseHandler): try: # for each sig on the third_party_invite block of the actual invite for server, signature_block in signed["signatures"].items(): - for key_name, encoded_signature in signature_block.items(): + for key_name in signature_block.keys(): if not key_name.startswith("ed25519:"): continue diff --git a/synapse/logging/_remote.py b/synapse/logging/_remote.py index 4e8b0f8d10..c515690b38 100644 --- a/synapse/logging/_remote.py +++ b/synapse/logging/_remote.py @@ -226,11 +226,11 @@ class RemoteHandler(logging.Handler): old_buffer = self._buffer self._buffer = deque() - for i in range(buffer_split): + for _ in range(buffer_split): self._buffer.append(old_buffer.popleft()) end_buffer = [] - for i in range(buffer_split): + for _ in range(buffer_split): end_buffer.append(old_buffer.pop()) self._buffer.extend(reversed(end_buffer)) diff --git a/synapse/rest/key/v2/remote_key_resource.py b/synapse/rest/key/v2/remote_key_resource.py index c57ac22e58..f648678b09 100644 --- a/synapse/rest/key/v2/remote_key_resource.py +++ b/synapse/rest/key/v2/remote_key_resource.py @@ -144,7 +144,7 @@ class RemoteKey(DirectServeJsonResource): # Note that the value is unused. cache_misses = {} # type: Dict[str, Dict[str, int]] - for (server_name, key_id, from_server), results in cached.items(): + for (server_name, key_id, _), results in cached.items(): results = [(result["ts_added_ms"], result) for result in results] if not results and key_id is not None: @@ -206,7 +206,7 @@ class RemoteKey(DirectServeJsonResource): # Cast to bytes since postgresql returns a memoryview. 
json_results.add(bytes(most_recent_result["key_json"])) else: - for ts_added, result in results: + for _, result in results: # Cast to bytes since postgresql returns a memoryview. json_results.add(bytes(result["key_json"])) diff --git a/synapse/storage/databases/main/events.py b/synapse/storage/databases/main/events.py index a362521e20..fd25c8112d 100644 --- a/synapse/storage/databases/main/events.py +++ b/synapse/storage/databases/main/events.py @@ -170,7 +170,7 @@ class PersistEventsStore: ) async with stream_ordering_manager as stream_orderings: - for (event, context), stream in zip(events_and_contexts, stream_orderings): + for (event, _), stream in zip(events_and_contexts, stream_orderings): event.internal_metadata.stream_ordering = stream await self.db_pool.runInteraction( @@ -297,7 +297,7 @@ class PersistEventsStore: txn.execute(sql + clause, args) to_recursively_check = [] - for event_id, prev_event_id, metadata, rejected in txn: + for _, prev_event_id, metadata, rejected in txn: if prev_event_id in existing_prevs: continue @@ -1127,7 +1127,7 @@ class PersistEventsStore: def _update_forward_extremities_txn( self, txn, new_forward_extremities, max_stream_order ): - for room_id, new_extrem in new_forward_extremities.items(): + for room_id in new_forward_extremities.keys(): self.db_pool.simple_delete_txn( txn, table="event_forward_extremities", keyvalues={"room_id": room_id} ) @@ -1399,7 +1399,7 @@ class PersistEventsStore: ] state_values = [] - for event, context in state_events_and_contexts: + for event, _ in state_events_and_contexts: vals = { "event_id": event.event_id, "room_id": event.room_id, @@ -1468,7 +1468,7 @@ class PersistEventsStore: # nothing to do here return - for event, context in events_and_contexts: + for event, _ in events_and_contexts: if event.type == EventTypes.Redaction and event.redacts is not None: # Remove the entries in the event_push_actions table for the # redacted event. 
diff --git a/tests/handlers/test_federation.py b/tests/handlers/test_federation.py index c7b0975a19..8796af45ed 100644 --- a/tests/handlers/test_federation.py +++ b/tests/handlers/test_federation.py @@ -222,7 +222,7 @@ class FederationTestCase(unittest.HomeserverTestCase): room_version, ) - for i in range(3): + for _ in range(3): event = create_invite() self.get_success( self.handler.on_invite_request( diff --git a/tests/replication/tcp/streams/test_events.py b/tests/replication/tcp/streams/test_events.py index 323237c1bb..f51fa0a79e 100644 --- a/tests/replication/tcp/streams/test_events.py +++ b/tests/replication/tcp/streams/test_events.py @@ -239,7 +239,7 @@ class EventsStreamTestCase(BaseStreamTestCase): # the state rows are unsorted state_rows = [] # type: List[EventsStreamCurrentStateRow] - for stream_name, token, row in received_rows: + for stream_name, _, row in received_rows: self.assertEqual("events", stream_name) self.assertIsInstance(row, EventsStreamRow) self.assertEqual(row.type, "state") @@ -356,7 +356,7 @@ class EventsStreamTestCase(BaseStreamTestCase): # the state rows are unsorted state_rows = [] # type: List[EventsStreamCurrentStateRow] - for j in range(STATES_PER_USER + 1): + for _ in range(STATES_PER_USER + 1): stream_name, token, row = received_rows.pop(0) self.assertEqual("events", stream_name) self.assertIsInstance(row, EventsStreamRow) diff --git a/tests/rest/admin/test_device.py b/tests/rest/admin/test_device.py index ecbee30bb5..120730b764 100644 --- a/tests/rest/admin/test_device.py +++ b/tests/rest/admin/test_device.py @@ -430,7 +430,7 @@ class DevicesRestTestCase(unittest.HomeserverTestCase): """ # Create devices number_devices = 5 - for n in range(number_devices): + for _ in range(number_devices): self.login("user", "pass") # Get devices @@ -547,7 +547,7 @@ class DeleteDevicesRestTestCase(unittest.HomeserverTestCase): # Create devices number_devices = 5 - for n in range(number_devices): + for _ in range(number_devices): self.login("user", "pass") # Get devices diff --git a/tests/rest/admin/test_event_reports.py b/tests/rest/admin/test_event_reports.py index 8c66da3af4..29341bc6e9 100644 --- a/tests/rest/admin/test_event_reports.py +++ b/tests/rest/admin/test_event_reports.py @@ -48,22 +48,22 @@ class EventReportsTestCase(unittest.HomeserverTestCase): self.helper.join(self.room_id2, user=self.admin_user, tok=self.admin_user_tok) # Two rooms and two users. 
Every user sends and reports every room event - for i in range(5): + for _ in range(5): self._create_event_and_report( room_id=self.room_id1, user_tok=self.other_user_tok, ) - for i in range(5): + for _ in range(5): self._create_event_and_report( room_id=self.room_id2, user_tok=self.other_user_tok, ) - for i in range(5): + for _ in range(5): self._create_event_and_report( room_id=self.room_id1, user_tok=self.admin_user_tok, ) - for i in range(5): + for _ in range(5): self._create_event_and_report( room_id=self.room_id2, user_tok=self.admin_user_tok, diff --git a/tests/rest/admin/test_room.py b/tests/rest/admin/test_room.py index 6bcd997085..6b84188120 100644 --- a/tests/rest/admin/test_room.py +++ b/tests/rest/admin/test_room.py @@ -615,7 +615,7 @@ class RoomTestCase(unittest.HomeserverTestCase): # Create 3 test rooms total_rooms = 3 room_ids = [] - for x in range(total_rooms): + for _ in range(total_rooms): room_id = self.helper.create_room_as( self.admin_user, tok=self.admin_user_tok ) @@ -679,7 +679,7 @@ class RoomTestCase(unittest.HomeserverTestCase): # Create 5 test rooms total_rooms = 5 room_ids = [] - for x in range(total_rooms): + for _ in range(total_rooms): room_id = self.helper.create_room_as( self.admin_user, tok=self.admin_user_tok ) @@ -1577,7 +1577,7 @@ class JoinAliasRoomTestCase(unittest.HomeserverTestCase): channel.json_body["event"]["event_id"], events[midway]["event_id"] ) - for i, found_event in enumerate(channel.json_body["events_before"]): + for found_event in channel.json_body["events_before"]: for j, posted_event in enumerate(events): if found_event["event_id"] == posted_event["event_id"]: self.assertTrue(j < midway) @@ -1585,7 +1585,7 @@ class JoinAliasRoomTestCase(unittest.HomeserverTestCase): else: self.fail("Event %s from events_before not found" % j) - for i, found_event in enumerate(channel.json_body["events_after"]): + for found_event in channel.json_body["events_after"]: for j, posted_event in enumerate(events): if found_event["event_id"] == posted_event["event_id"]: self.assertTrue(j > midway) diff --git a/tests/rest/admin/test_statistics.py b/tests/rest/admin/test_statistics.py index 363bdeeb2d..79cac4266b 100644 --- a/tests/rest/admin/test_statistics.py +++ b/tests/rest/admin/test_statistics.py @@ -467,7 +467,7 @@ class UserMediaStatisticsTestCase(unittest.HomeserverTestCase): number_media: Number of media to be created for the user """ upload_resource = self.media_repo.children[b"upload"] - for i in range(number_media): + for _ in range(number_media): # file size is 67 Byte image_data = unhexlify( b"89504e470d0a1a0a0000000d4948445200000001000000010806" diff --git a/tests/rest/admin/test_user.py b/tests/rest/admin/test_user.py index 2844c493fc..b3afd51522 100644 --- a/tests/rest/admin/test_user.py +++ b/tests/rest/admin/test_user.py @@ -1937,7 +1937,7 @@ class UserMembershipRestTestCase(unittest.HomeserverTestCase): # Create rooms and join other_user_tok = self.login("user", "pass") number_rooms = 5 - for n in range(number_rooms): + for _ in range(number_rooms): self.helper.create_room_as(self.other_user, tok=other_user_tok) # Get rooms @@ -2517,7 +2517,7 @@ class UserMediaRestTestCase(unittest.HomeserverTestCase): user_token: Access token of the user number_media: Number of media to be created for the user """ - for i in range(number_media): + for _ in range(number_media): # file size is 67 Byte image_data = unhexlify( b"89504e470d0a1a0a0000000d4948445200000001000000010806" diff --git a/tests/rest/client/v1/test_rooms.py 
b/tests/rest/client/v1/test_rooms.py index 92babf65e0..a3694f3d02 100644 --- a/tests/rest/client/v1/test_rooms.py +++ b/tests/rest/client/v1/test_rooms.py @@ -646,7 +646,7 @@ class RoomInviteRatelimitTestCase(RoomBase): def test_invites_by_users_ratelimit(self): """Tests that invites to a specific user are actually rate-limited.""" - for i in range(3): + for _ in range(3): room_id = self.helper.create_room_as(self.user_id) self.helper.invite(room_id, self.user_id, "@other-users:red") @@ -668,7 +668,7 @@ class RoomJoinRatelimitTestCase(RoomBase): ) def test_join_local_ratelimit(self): """Tests that local joins are actually rate-limited.""" - for i in range(3): + for _ in range(3): self.helper.create_room_as(self.user_id) self.helper.create_room_as(self.user_id, expect_code=429) @@ -733,7 +733,7 @@ class RoomJoinRatelimitTestCase(RoomBase): for path in paths_to_test: # Make sure we send more requests than the rate-limiting config would allow # if all of these requests ended up joining the user to a room. - for i in range(4): + for _ in range(4): channel = self.make_request("POST", path % room_id, {}) self.assertEquals(channel.code, 200) diff --git a/tests/storage/test_event_metrics.py b/tests/storage/test_event_metrics.py index 397e68fe0a..088fbb247b 100644 --- a/tests/storage/test_event_metrics.py +++ b/tests/storage/test_event_metrics.py @@ -38,12 +38,12 @@ class ExtremStatisticsTestCase(HomeserverTestCase): last_event = None # Make a real event chain - for i in range(event_count): + for _ in range(event_count): ev = self.create_and_send_event(room_id, user, False, last_event) last_event = [ev] # Sprinkle in some extremities - for i in range(extrems): + for _ in range(extrems): ev = self.create_and_send_event(room_id, user, False, last_event) # Let it run for a while, then pull out the statistics from the diff --git a/tests/unittest.py b/tests/unittest.py index d890ad981f..ee22a53849 100644 --- a/tests/unittest.py +++ b/tests/unittest.py @@ -133,7 +133,7 @@ class TestCase(unittest.TestCase): def assertObjectHasAttributes(self, attrs, obj): """Asserts that the given object has each of the attributes given, and that the value of each matches according to assertEquals.""" - for (key, value) in attrs.items(): + for key in attrs.keys(): if not hasattr(obj, key): raise AssertionError("Expected obj to have a '.%s'" % key) try: diff --git a/tests/utils.py b/tests/utils.py index af6b32fc66..63d52b9140 100644 --- a/tests/utils.py +++ b/tests/utils.py @@ -303,7 +303,7 @@ def setup_test_homeserver( # database for a few more seconds due to flakiness, preventing # us from dropping it when the test is over. If we can't drop # it, warn and move on. - for x in range(5): + for _ in range(5): try: cur.execute("DROP DATABASE IF EXISTS %s;" % (test_db,)) db_conn.commit() -- cgit 1.5.1 From db70435de740b534936df75c435290a37dcc015f Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Tue, 20 Apr 2021 13:37:54 +0100 Subject: Fix bug where we sent remote presence states to remote servers (#9850) --- changelog.d/9850.feature | 1 + synapse/federation/sender/__init__.py | 4 ++++ synapse/handlers/presence.py | 11 ++++++++--- 3 files changed, 13 insertions(+), 3 deletions(-) create mode 100644 changelog.d/9850.feature diff --git a/changelog.d/9850.feature b/changelog.d/9850.feature new file mode 100644 index 0000000000..f56b0bb3bd --- /dev/null +++ b/changelog.d/9850.feature @@ -0,0 +1 @@ +Add experimental support for handling presence on a worker. 
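The point of the fix below is an invariant: only presence states belonging to local users may be handed to the federation sender. As a rough standalone illustration (not Synapse's implementation, and assuming well-formed `@localpart:server_name` user IDs), the `is_mine_id` check amounts to comparing the server part of the user ID against the homeserver's own name:

    def is_mine_id(user_id: str, server_name: str) -> bool:
        # Matrix user IDs look like "@localpart:server_name"; a user is local
        # when everything after the first colon matches our own server name.
        return user_id.split(":", 1)[1] == server_name

    states = ["@alice:example.com", "@bob:remote.org"]
    local_only = [u for u in states if is_mine_id(u, "example.com")]
    assert local_only == ["@alice:example.com"]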
diff --git a/synapse/federation/sender/__init__.py b/synapse/federation/sender/__init__.py index 6266accaf5..b00a55324c 100644 --- a/synapse/federation/sender/__init__.py +++ b/synapse/federation/sender/__init__.py @@ -539,6 +539,10 @@ class FederationSender(AbstractFederationSender): # No-op if presence is disabled. return + # Ensure we only send out presence states for local users. + for state in states: + assert self.is_mine_id(state.user_id) + for destination in destinations: if destination == self.server_name: continue diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index 6460eb9952..bd2382193f 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -125,6 +125,7 @@ class BasePresenceHandler(abc.ABC): self.store = hs.get_datastore() self.presence_router = hs.get_presence_router() self.state = hs.get_state_handler() + self.is_mine_id = hs.is_mine_id self._federation = None if hs.should_send_federation() or not hs.config.worker_app: @@ -261,7 +262,8 @@ class BasePresenceHandler(abc.ABC): self, states: List[UserPresenceState] ): """If this instance is a federation sender, send the states to all - destinations that are interested. + destinations that are interested. Filters out any states for remote + users. """ if not self._send_federation: @@ -270,6 +272,11 @@ class BasePresenceHandler(abc.ABC): # If this worker sends federation we must have a FederationSender. assert self._federation + states = [s for s in states if self.is_mine_id(s.user_id)] + + if not states: + return + hosts_and_states = await get_interested_remotes( self.store, self.presence_router, @@ -292,7 +299,6 @@ class WorkerPresenceHandler(BasePresenceHandler): def __init__(self, hs): super().__init__(hs) self.hs = hs - self.is_mine_id = hs.is_mine_id self._presence_enabled = hs.config.use_presence @@ -492,7 +498,6 @@ class PresenceHandler(BasePresenceHandler): def __init__(self, hs: "HomeServer"): super().__init__(hs) self.hs = hs - self.is_mine_id = hs.is_mine_id self.server_name = hs.hostname self.wheel_timer = WheelTimer() self.notifier = hs.get_notifier() -- cgit 1.5.1 From de0d088adc0cf3d5bbd80238b88143426cd6eaca Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Tue, 20 Apr 2021 14:11:24 +0100 Subject: Add presence federation stream (#9819) --- changelog.d/9819.feature | 1 + synapse/handlers/presence.py | 243 +++++++++++++++++++++++++--- synapse/replication/tcp/client.py | 7 +- synapse/replication/tcp/streams/__init__.py | 3 + synapse/replication/tcp/streams/_base.py | 24 +++ tests/handlers/test_presence.py | 179 +++++++++++++++++++- 6 files changed, 426 insertions(+), 31 deletions(-) create mode 100644 changelog.d/9819.feature diff --git a/changelog.d/9819.feature b/changelog.d/9819.feature new file mode 100644 index 0000000000..f56b0bb3bd --- /dev/null +++ b/changelog.d/9819.feature @@ -0,0 +1 @@ +Add experimental support for handling presence on a worker. 
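The `PresenceFederationQueue` added below keeps roughly five minutes of updates in a list that is naturally sorted by send time, and prunes it with `bisect`. A standalone sketch of that pruning step, with made-up queue contents: searching with the 1-tuple `(clear_before,)` works because Python compares tuples element-wise, so the probe sorts before every entry whose timestamp is greater than or equal to `clear_before`.

    from bisect import bisect

    KEEP_FOR_MS = 5 * 60 * 1000  # mirrors _KEEP_ITEMS_IN_QUEUE_FOR_MS

    # Entries are (timestamp_ms, stream_id, destinations, user_ids),
    # appended in timestamp order.
    queue = [
        (1_000, 1, ("dest1",), {"@user1:test"}),
        (61_000, 2, ("dest2",), {"@user2:test"}),
        (400_000, 3, ("dest3",), {"@user3:test"}),
    ]

    def clear_queue(queue, now_ms):
        clear_before = now_ms - KEEP_FOR_MS
        # Everything strictly older than clear_before falls before the bisect
        # point and is dropped; the rest of the list is kept as-is.
        return queue[bisect(queue, (clear_before,)):]

    # At now=500_000ms, only the entry sent at t=400_000ms is young enough.
    assert clear_queue(queue, 500_000) == [(400_000, 3, ("dest3",), {"@user3:test"})]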
diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index bd2382193f..598466c9bd 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -24,6 +24,7 @@ The methods that define policy are: import abc import contextlib import logging +from bisect import bisect from contextlib import contextmanager from typing import ( TYPE_CHECKING, @@ -53,7 +54,9 @@ from synapse.replication.http.presence import ( ReplicationBumpPresenceActiveTime, ReplicationPresenceSetState, ) +from synapse.replication.http.streams import ReplicationGetStreamUpdates from synapse.replication.tcp.commands import ClearUserSyncsCommand +from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream from synapse.state import StateHandler from synapse.storage.databases.main import DataStore from synapse.types import Collection, JsonDict, UserID, get_domain_from_id @@ -128,10 +131,10 @@ class BasePresenceHandler(abc.ABC): self.is_mine_id = hs.is_mine_id self._federation = None - if hs.should_send_federation() or not hs.config.worker_app: + if hs.should_send_federation(): self._federation = hs.get_federation_sender() - self._send_federation = hs.should_send_federation() + self._federation_queue = PresenceFederationQueue(hs, self) self._busy_presence_enabled = hs.config.experimental.msc3026_enabled @@ -254,9 +257,17 @@ class BasePresenceHandler(abc.ABC): """ pass - async def process_replication_rows(self, token, rows): - """Process presence stream rows received over replication.""" - pass + async def process_replication_rows( + self, stream_name: str, instance_name: str, token: int, rows: list + ): + """Process streams received over replication.""" + await self._federation_queue.process_replication_rows( + stream_name, instance_name, token, rows + ) + + def get_federation_queue(self) -> "PresenceFederationQueue": + """Get the presence federation queue.""" + return self._federation_queue async def maybe_send_presence_to_interested_destinations( self, states: List[UserPresenceState] @@ -266,12 +277,9 @@ class BasePresenceHandler(abc.ABC): users. """ - if not self._send_federation: + if not self._federation: return - # If this worker sends federation we must have a FederationSender. - assert self._federation - states = [s for s in states if self.is_mine_id(s.user_id)] if not states: @@ -427,7 +435,14 @@ class WorkerPresenceHandler(BasePresenceHandler): # If this is a federation sender, notify about presence updates. await self.maybe_send_presence_to_interested_destinations(states) - async def process_replication_rows(self, token, rows): + async def process_replication_rows( + self, stream_name: str, instance_name: str, token: int, rows: list + ): + await super().process_replication_rows(stream_name, instance_name, token, rows) + + if stream_name != PresenceStream.NAME: + return + states = [ UserPresenceState( row.user_id, @@ -729,12 +744,10 @@ class PresenceHandler(BasePresenceHandler): self.state, ) - # Since this is master we know that we have a federation sender or - # queue, and so this will be defined. 
- assert self._federation - for destinations, states in hosts_and_states: - self._federation.send_presence_to_destinations(states, destinations) + self._federation_queue.send_presence_to_destinations( + states, destinations + ) async def _handle_timeouts(self): """Checks the presence of users that have timed out and updates as @@ -1213,13 +1226,9 @@ class PresenceHandler(BasePresenceHandler): user_presence_states ) - # Since this is master we know that we have a federation sender or - # queue, and so this will be defined. - assert self._federation - # Send out user presence updates for each destination for destination, user_state_set in presence_destinations.items(): - self._federation.send_presence_to_destinations( + self._federation_queue.send_presence_to_destinations( destinations=[destination], states=user_state_set ) @@ -1864,3 +1873,197 @@ async def get_interested_remotes( hosts_and_states.append(([host], states)) return hosts_and_states + + +class PresenceFederationQueue: + """Handles sending ad hoc presence updates over federation, which are *not* + due to state updates (that get handled via the presence stream), e.g. + federation pings and sending existing present states to newly joined hosts. + + Only the last N minutes will be queued, so if a federation sender instance + is down for longer then some updates will be dropped. This is OK as presence + is ephemeral, and so it will self correct eventually. + + On workers the class tracks the last received position of the stream from + replication, and handles querying for missed updates over HTTP replication, + c.f. `get_current_token` and `get_replication_rows`. + """ + + # How long to keep entries in the queue for. Workers that are down for + # longer than this duration will miss out on older updates. + _KEEP_ITEMS_IN_QUEUE_FOR_MS = 5 * 60 * 1000 + + # How often to check if we can expire entries from the queue. + _CLEAR_ITEMS_EVERY_MS = 60 * 1000 + + def __init__(self, hs: "HomeServer", presence_handler: BasePresenceHandler): + self._clock = hs.get_clock() + self._notifier = hs.get_notifier() + self._instance_name = hs.get_instance_name() + self._presence_handler = presence_handler + self._repl_client = ReplicationGetStreamUpdates.make_client(hs) + + # Should we keep a queue of recent presence updates? We only bother if + # another process may be handling federation sending. + self._queue_presence_updates = True + + # Whether this instance is a presence writer. + self._presence_writer = hs.config.worker.worker_app is None + + # The FederationSender instance, if this process sends federation traffic directly. + self._federation = None + + if hs.should_send_federation(): + self._federation = hs.get_federation_sender() + + # We don't bother queuing up presence states if only this instance + # is sending federation. + if hs.config.worker.federation_shard_config.instances == [ + self._instance_name + ]: + self._queue_presence_updates = False + + # The queue of recently queued updates as tuples of: `(timestamp, + # stream_id, destinations, user_ids)`. We don't store the full states + # for efficiency, and remote workers will already have the full states + # cached. 
+ self._queue = [] # type: List[Tuple[int, int, Collection[str], Set[str]]] + + self._next_id = 1 + + # Map from instance name to current token + self._current_tokens = {} # type: Dict[str, int] + + if self._queue_presence_updates: + self._clock.looping_call(self._clear_queue, self._CLEAR_ITEMS_EVERY_MS) + + def _clear_queue(self): + """Clear out older entries from the queue.""" + clear_before = self._clock.time_msec() - self._KEEP_ITEMS_IN_QUEUE_FOR_MS + + # The queue is sorted by timestamp, so we can bisect to find the right + # place to purge before. Note that we are searching using a 1-tuple with + # the time, which does The Right Thing since the queue is a tuple where + # the first item is a timestamp. + index = bisect(self._queue, (clear_before,)) + self._queue = self._queue[index:] + + def send_presence_to_destinations( + self, states: Collection[UserPresenceState], destinations: Collection[str] + ) -> None: + """Send the presence states to the given destinations. + + Will forward to the local federation sender (if there is one) and queue + to send over replication (if there are other federation sender instances.). + + Must only be called on the master process. + """ + + # This should only be called on a presence writer. + assert self._presence_writer + + if self._federation: + self._federation.send_presence_to_destinations( + states=states, + destinations=destinations, + ) + + if not self._queue_presence_updates: + return + + now = self._clock.time_msec() + + stream_id = self._next_id + self._next_id += 1 + + self._queue.append((now, stream_id, destinations, {s.user_id for s in states})) + + self._notifier.notify_replication() + + def get_current_token(self, instance_name: str) -> int: + """Get the current position of the stream. + + On workers this returns the last stream ID received from replication. + """ + if instance_name == self._instance_name: + return self._next_id - 1 + else: + return self._current_tokens.get(instance_name, 0) + + async def get_replication_rows( + self, + instance_name: str, + from_token: int, + upto_token: int, + target_row_count: int, + ) -> Tuple[List[Tuple[int, Tuple[str, str]]], int, bool]: + """Get all the updates between the two tokens. + + We return rows in the form of `(destination, user_id)` to keep the size + of each row bounded (rather than returning the sets in a row). + + On workers this will query the master process via HTTP replication. + """ + if instance_name != self._instance_name: + # If not local we query over http replication from the master + result = await self._repl_client( + instance_name=instance_name, + stream_name=PresenceFederationStream.NAME, + from_token=from_token, + upto_token=upto_token, + ) + return result["updates"], result["upto_token"], result["limited"] + + # We can find the correct position in the queue by noting that there is + # exactly one entry per stream ID, and that the last entry has an ID of + # `self._next_id - 1`, so we can count backwards from the end. + # + # Since the start of the queue is periodically truncated we need to + # handle the case where `from_token` stream ID has already been dropped. 
+ start_idx = max(from_token - self._next_id, -len(self._queue)) + + to_send = [] # type: List[Tuple[int, Tuple[str, str]]] + limited = False + new_id = upto_token + for _, stream_id, destinations, user_ids in self._queue[start_idx:]: + if stream_id > upto_token: + break + + new_id = stream_id + + to_send.extend( + (stream_id, (destination, user_id)) + for destination in destinations + for user_id in user_ids + ) + + if len(to_send) > target_row_count: + limited = True + break + + return to_send, new_id, limited + + async def process_replication_rows( + self, stream_name: str, instance_name: str, token: int, rows: list + ): + if stream_name != PresenceFederationStream.NAME: + return + + # We keep track of the current tokens (so that we can catch up with anything we missed after a disconnect) + self._current_tokens[instance_name] = token + + # If we're a federation sender we pull out the presence states to send + # and forward them on. + if not self._federation: + return + + hosts_to_users = {} # type: Dict[str, Set[str]] + for row in rows: + hosts_to_users.setdefault(row.destination, set()).add(row.user_id) + + for host, user_ids in hosts_to_users.items(): + states = await self._presence_handler.current_state_for_users(user_ids) + self._federation.send_presence_to_destinations( + states=states.values(), + destinations=[host], + ) diff --git a/synapse/replication/tcp/client.py b/synapse/replication/tcp/client.py index ce5d651cb8..4f3c6a18b6 100644 --- a/synapse/replication/tcp/client.py +++ b/synapse/replication/tcp/client.py @@ -29,7 +29,6 @@ from synapse.replication.tcp.streams import ( AccountDataStream, DeviceListsStream, GroupServerStream, - PresenceStream, PushersStream, PushRulesStream, ReceiptsStream, @@ -191,8 +190,6 @@ class ReplicationDataHandler: self.stop_pusher(row.user_id, row.app_id, row.pushkey) else: await self.start_pusher(row.user_id, row.app_id, row.pushkey) - elif stream_name == PresenceStream.NAME: - await self._presence_handler.process_replication_rows(token, rows) elif stream_name == EventsStream.NAME: # We shouldn't get multiple rows per token for events stream, so # we don't need to optimise this for multiple rows. @@ -221,6 +218,10 @@ class ReplicationDataHandler: membership=row.data.membership, ) + await self._presence_handler.process_replication_rows( + stream_name, instance_name, token, rows + ) + # Notify any waiting deferreds. The list is ordered by position so we # just iterate through the list until we reach a position that is # greater than the received row position. 
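On the consuming side shown above, each replication row is a bare `(destination, user_id)` pair, so a federation sender first groups the rows per destination and only then fetches the full presence states. A small self-contained sketch of that aggregation step, with hypothetical row values:

    from typing import Dict, Set

    rows = [
        ("remote1.example", "@user1:test"),
        ("remote1.example", "@user2:test"),
        ("remote2.example", "@user1:test"),
    ]

    hosts_to_users: Dict[str, Set[str]] = {}
    for destination, user_id in rows:
        hosts_to_users.setdefault(destination, set()).add(user_id)

    assert hosts_to_users == {
        "remote1.example": {"@user1:test", "@user2:test"},
        "remote2.example": {"@user1:test"},
    }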
diff --git a/synapse/replication/tcp/streams/__init__.py b/synapse/replication/tcp/streams/__init__.py index fb74ac4e98..4c0023c68a 100644 --- a/synapse/replication/tcp/streams/__init__.py +++ b/synapse/replication/tcp/streams/__init__.py @@ -30,6 +30,7 @@ from synapse.replication.tcp.streams._base import ( CachesStream, DeviceListsStream, GroupServerStream, + PresenceFederationStream, PresenceStream, PublicRoomsStream, PushersStream, @@ -50,6 +51,7 @@ STREAMS_MAP = { EventsStream, BackfillStream, PresenceStream, + PresenceFederationStream, TypingStream, ReceiptsStream, PushRulesStream, @@ -71,6 +73,7 @@ __all__ = [ "Stream", "BackfillStream", "PresenceStream", + "PresenceFederationStream", "TypingStream", "ReceiptsStream", "PushRulesStream", diff --git a/synapse/replication/tcp/streams/_base.py b/synapse/replication/tcp/streams/_base.py index 520c45f151..9d75a89f1c 100644 --- a/synapse/replication/tcp/streams/_base.py +++ b/synapse/replication/tcp/streams/_base.py @@ -290,6 +290,30 @@ class PresenceStream(Stream): ) +class PresenceFederationStream(Stream): + """A stream used to send ad hoc presence updates over federation. + + Streams the remote destination and the user ID of the presence state to + send. + """ + + @attr.s(slots=True, auto_attribs=True) + class PresenceFederationStreamRow: + destination: str + user_id: str + + NAME = "presence_federation" + ROW_TYPE = PresenceFederationStreamRow + + def __init__(self, hs: "HomeServer"): + federation_queue = hs.get_presence_handler().get_federation_queue() + super().__init__( + hs.get_instance_name(), + federation_queue.get_current_token, + federation_queue.get_replication_rows, + ) + + class TypingStream(Stream): TypingStreamRow = namedtuple( "TypingStreamRow", ("room_id", "user_ids") # str # list(str) diff --git a/tests/handlers/test_presence.py b/tests/handlers/test_presence.py index 2d12e82897..61271cd084 100644 --- a/tests/handlers/test_presence.py +++ b/tests/handlers/test_presence.py @@ -21,6 +21,7 @@ from synapse.api.constants import EventTypes, Membership, PresenceState from synapse.api.presence import UserPresenceState from synapse.api.room_versions import KNOWN_ROOM_VERSIONS from synapse.events.builder import EventBuilder +from synapse.federation.sender import FederationSender from synapse.handlers.presence import ( EXTERNAL_PROCESS_EXPIRY, FEDERATION_PING_INTERVAL, @@ -471,6 +472,168 @@ class PresenceHandlerTestCase(unittest.HomeserverTestCase): self.assertEqual(state.state, PresenceState.OFFLINE) +class PresenceFederationQueueTestCase(unittest.HomeserverTestCase): + def prepare(self, reactor, clock, hs): + self.presence_handler = hs.get_presence_handler() + self.clock = hs.get_clock() + self.instance_name = hs.get_instance_name() + + self.queue = self.presence_handler.get_federation_queue() + + def test_send_and_get(self): + state1 = UserPresenceState.default("@user1:test") + state2 = UserPresenceState.default("@user2:test") + state3 = UserPresenceState.default("@user3:test") + + prev_token = self.queue.get_current_token(self.instance_name) + + self.queue.send_presence_to_destinations((state1, state2), ("dest1", "dest2")) + self.queue.send_presence_to_destinations((state3,), ("dest3",)) + + now_token = self.queue.get_current_token(self.instance_name) + + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", prev_token, now_token, 10) + ) + + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + + expected_rows = [ + (1, ("dest1", "@user1:test")), + (1, ("dest2", 
"@user1:test")), + (1, ("dest1", "@user2:test")), + (1, ("dest2", "@user2:test")), + (2, ("dest3", "@user3:test")), + ] + + self.assertCountEqual(rows, expected_rows) + + def test_send_and_get_split(self): + state1 = UserPresenceState.default("@user1:test") + state2 = UserPresenceState.default("@user2:test") + state3 = UserPresenceState.default("@user3:test") + + prev_token = self.queue.get_current_token(self.instance_name) + + self.queue.send_presence_to_destinations((state1, state2), ("dest1", "dest2")) + + now_token = self.queue.get_current_token(self.instance_name) + + self.queue.send_presence_to_destinations((state3,), ("dest3",)) + + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", prev_token, now_token, 10) + ) + + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + + expected_rows = [ + (1, ("dest1", "@user1:test")), + (1, ("dest2", "@user1:test")), + (1, ("dest1", "@user2:test")), + (1, ("dest2", "@user2:test")), + ] + + self.assertCountEqual(rows, expected_rows) + + def test_clear_queue_all(self): + state1 = UserPresenceState.default("@user1:test") + state2 = UserPresenceState.default("@user2:test") + state3 = UserPresenceState.default("@user3:test") + + prev_token = self.queue.get_current_token(self.instance_name) + + self.queue.send_presence_to_destinations((state1, state2), ("dest1", "dest2")) + self.queue.send_presence_to_destinations((state3,), ("dest3",)) + + self.reactor.advance(10 * 60 * 1000) + + now_token = self.queue.get_current_token(self.instance_name) + + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", prev_token, now_token, 10) + ) + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + self.assertCountEqual(rows, []) + + prev_token = self.queue.get_current_token(self.instance_name) + + self.queue.send_presence_to_destinations((state1, state2), ("dest1", "dest2")) + self.queue.send_presence_to_destinations((state3,), ("dest3",)) + + now_token = self.queue.get_current_token(self.instance_name) + + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", prev_token, now_token, 10) + ) + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + + expected_rows = [ + (3, ("dest1", "@user1:test")), + (3, ("dest2", "@user1:test")), + (3, ("dest1", "@user2:test")), + (3, ("dest2", "@user2:test")), + (4, ("dest3", "@user3:test")), + ] + + self.assertCountEqual(rows, expected_rows) + + def test_partially_clear_queue(self): + state1 = UserPresenceState.default("@user1:test") + state2 = UserPresenceState.default("@user2:test") + state3 = UserPresenceState.default("@user3:test") + + prev_token = self.queue.get_current_token(self.instance_name) + + self.queue.send_presence_to_destinations((state1, state2), ("dest1", "dest2")) + + self.reactor.advance(2 * 60 * 1000) + + self.queue.send_presence_to_destinations((state3,), ("dest3",)) + + self.reactor.advance(4 * 60 * 1000) + + now_token = self.queue.get_current_token(self.instance_name) + + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", prev_token, now_token, 10) + ) + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + + expected_rows = [ + (2, ("dest3", "@user3:test")), + ] + self.assertCountEqual(rows, []) + + prev_token = self.queue.get_current_token(self.instance_name) + + self.queue.send_presence_to_destinations((state1, state2), ("dest1", "dest2")) + self.queue.send_presence_to_destinations((state3,), 
("dest3",)) + + now_token = self.queue.get_current_token(self.instance_name) + + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", prev_token, now_token, 10) + ) + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + + expected_rows = [ + (3, ("dest1", "@user1:test")), + (3, ("dest2", "@user1:test")), + (3, ("dest1", "@user2:test")), + (3, ("dest2", "@user2:test")), + (4, ("dest3", "@user3:test")), + ] + + self.assertCountEqual(rows, expected_rows) + + class PresenceJoinTestCase(unittest.HomeserverTestCase): """Tests remote servers get told about presence of users in the room when they join and when new local users join. @@ -482,10 +645,17 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase): def make_homeserver(self, reactor, clock): hs = self.setup_test_homeserver( - "server", federation_http_client=None, federation_sender=Mock() + "server", + federation_http_client=None, + federation_sender=Mock(spec=FederationSender), ) return hs + def default_config(self): + config = super().default_config() + config["send_federation"] = True + return config + def prepare(self, reactor, clock, hs): self.federation_sender = hs.get_federation_sender() self.event_builder_factory = hs.get_event_builder_factory() @@ -529,9 +699,6 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase): # Add a new remote server to the room self._add_new_user(room_id, "@alice:server2") - # We shouldn't have sent out any local presence *updates* - self.federation_sender.send_presence.assert_not_called() - # When new server is joined we send it the local users presence states. # We expect to only see user @test2:server, as @test:server is offline # and has a zero last_active_ts @@ -550,7 +717,6 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase): self.federation_sender.reset_mock() self._add_new_user(room_id, "@bob:server3") - self.federation_sender.send_presence.assert_not_called() self.federation_sender.send_presence_to_destinations.assert_called_once_with( destinations=["server3"], states={expected_state} ) @@ -595,9 +761,6 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase): self.reactor.pump([0]) # Wait for presence updates to be handled - # We shouldn't have sent out any local presence *updates* - self.federation_sender.send_presence.assert_not_called() - # We expect to only send test2 presence to server2 and server3 expected_state = self.get_success( self.presence_handler.current_state_for_user("@test2:server") -- cgit 1.5.1 From b8c5f6fddbc8c3203c2841500767ef2fc9dc6ff6 Mon Sep 17 00:00:00 2001 From: Andrew Morgan Date: Tue, 20 Apr 2021 17:11:36 +0100 Subject: Mention Prometheus metrics regression in v1.32.0 --- CHANGES.md | 6 ++++++ UPGRADE.rst | 9 +++++++++ 2 files changed, 15 insertions(+) diff --git a/CHANGES.md b/CHANGES.md index 170d1e447d..7713328f12 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -1,6 +1,12 @@ Synapse 1.32.0 (2021-04-20) =========================== +**Note:** This release introduces [a regression](https://githubcom/matrix-org/synapse/issues/9853) +that can overwhelm connected Prometheus instances. This issue was not present in +Synapse v1.32.0rc1. It is recommended not to update to this release. If you have +upgraded to v1.32.0 already, please downgrade to v1.31.0. This issue will be +resolved in a subsequent release version shortly. + **Note:** This release requires Python 3.6+ and Postgres 9.6+ or SQLite 3.22+. This release removes the deprecated `GET /_synapse/admin/v1/users/` admin API. 
 Please use the [v2 API](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/user_admin_api.rst#query-user-account)
 instead, which has improved capabilities.
diff --git a/UPGRADE.rst b/UPGRADE.rst
index 7a9b869055..c8dce62227 100644
--- a/UPGRADE.rst
+++ b/UPGRADE.rst
@@ -88,6 +88,15 @@ for example:
 Upgrading to v1.32.0
 ====================
 
+Regression causing connected Prometheus instances to become overwhelmed
+-----------------------------------------------------------------------
+
+This release introduces `a regression <https://github.com/matrix-org/synapse/issues/9853>`_
+that can overwhelm connected Prometheus instances. This issue was not present in
+Synapse v1.32.0rc1. It is recommended not to update to this release. If you have
+upgraded to v1.32.0 already, please downgrade to v1.31.0. This issue will be
+resolved in a subsequent release shortly.
+
 Dropping support for old Python, Postgres and SQLite versions
 -------------------------------------------------------------
 
--
cgit 1.5.1

From 683d6f75af0e941e9ab3bc0a985aa6ed5cc7a238 Mon Sep 17 00:00:00 2001
From: Patrick Cloke <clokep@users.noreply.github.com>
Date: Tue, 20 Apr 2021 14:55:20 -0400
Subject: Rename handler and config modules which end in handler/config. (#9816)

---
 changelog.d/9816.misc                    |    1 +
 docs/sample_config.yaml                  |    2 +-
 docs/sso_mapping_providers.md            |    4 +-
 synapse/config/_base.pyi                 |   20 +-
 synapse/config/consent.py                |  119 +++
 synapse/config/consent_config.py         |  119 ---
 synapse/config/homeserver.py             |   10 +-
 synapse/config/jwt.py                    |  108 +++
 synapse/config/jwt_config.py             |  108 ---
 synapse/config/oidc.py                   |  595 +++++++++++++
 synapse/config/oidc_config.py            |  590 ------------
 synapse/config/saml2.py                  |  420 +++++++++
 synapse/config/saml2_config.py           |  415 ---------
 synapse/config/server_notices.py         |   83 ++
 synapse/config/server_notices_config.py  |   83 --
 synapse/handlers/cas.py                  |  393 +++++++++
 synapse/handlers/cas_handler.py          |  393 ---------
 synapse/handlers/oidc.py                 | 1384 +++++++++++++++++++++++++++++
 synapse/handlers/oidc_handler.py         | 1387 ------------------------------
 synapse/handlers/saml.py                 |  517 +++++++++++
 synapse/handlers/saml_handler.py         |  517 -----------
 synapse/rest/client/v2_alpha/register.py |    2 +-
 synapse/server.py                        |   10 +-
 tests/handlers/test_cas.py               |    2 +-
 tests/handlers/test_oidc.py              |    8 +-
 25 files changed, 3649 insertions(+), 3641 deletions(-)
 create mode 100644 changelog.d/9816.misc
 create mode 100644 synapse/config/consent.py
 delete mode 100644 synapse/config/consent_config.py
 create mode 100644 synapse/config/jwt.py
 delete mode 100644 synapse/config/jwt_config.py
 create mode 100644 synapse/config/oidc.py
 delete mode 100644 synapse/config/oidc_config.py
 create mode 100644 synapse/config/saml2.py
 delete mode 100644 synapse/config/saml2_config.py
 create mode 100644 synapse/config/server_notices.py
 delete mode 100644 synapse/config/server_notices_config.py
 create mode 100644 synapse/handlers/cas.py
 delete mode 100644 synapse/handlers/cas_handler.py
 create mode 100644 synapse/handlers/oidc.py
 delete mode 100644 synapse/handlers/oidc_handler.py
 create mode 100644 synapse/handlers/saml.py
 delete mode 100644 synapse/handlers/saml_handler.py

diff --git a/changelog.d/9816.misc b/changelog.d/9816.misc
new file mode 100644
index 0000000000..d098122500
--- /dev/null
+++ b/changelog.d/9816.misc
@@ -0,0 +1 @@
+Rename some handlers and config modules to not duplicate the top-level module.
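The rename is kept backwards compatible for one user-visible setting: configs that still name the old `synapse.handlers.oidc_handler.JinjaOidcMappingProvider` module are transparently rewritten to the new path (see `LEGACY_USER_MAPPING_PROVIDER` in `synapse/config/oidc.py` below). A minimal sketch of that normalisation:

```python
# Sketch of the compatibility shim used for the renamed mapping provider
# module: the old module path is mapped onto its new location so existing
# homeserver configs keep working after the rename.
DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc.JinjaOidcMappingProvider"
LEGACY_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingProvider"

def normalise_provider(module: str) -> str:
    """Rewrite the pre-rename module path to its post-rename location."""
    if module == LEGACY_USER_MAPPING_PROVIDER:
        return DEFAULT_USER_MAPPING_PROVIDER
    return module

assert normalise_provider(LEGACY_USER_MAPPING_PROVIDER) == DEFAULT_USER_MAPPING_PROVIDER
assert normalise_provider("my.custom.Provider") == "my.custom.Provider"
```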
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml index d260d76259..e0350279ad 100644 --- a/docs/sample_config.yaml +++ b/docs/sample_config.yaml @@ -1900,7 +1900,7 @@ saml2_config: # sub-properties: # # module: The class name of a custom mapping module. Default is -# 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'. +# 'synapse.handlers.oidc.JinjaOidcMappingProvider'. # See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers # for information on implementing a custom mapping provider. # diff --git a/docs/sso_mapping_providers.md b/docs/sso_mapping_providers.md index e1d6ede7ba..50020d1a4a 100644 --- a/docs/sso_mapping_providers.md +++ b/docs/sso_mapping_providers.md @@ -106,7 +106,7 @@ A custom mapping provider must specify the following methods: Synapse has a built-in OpenID mapping provider if a custom provider isn't specified in the config. It is located at -[`synapse.handlers.oidc_handler.JinjaOidcMappingProvider`](../synapse/handlers/oidc_handler.py). +[`synapse.handlers.oidc.JinjaOidcMappingProvider`](../synapse/handlers/oidc.py). ## SAML Mapping Providers @@ -190,4 +190,4 @@ A custom mapping provider must specify the following methods: Synapse has a built-in SAML mapping provider if a custom provider isn't specified in the config. It is located at -[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py). +[`synapse.handlers.saml.DefaultSamlMappingProvider`](../synapse/handlers/saml.py). diff --git a/synapse/config/_base.pyi b/synapse/config/_base.pyi index ddec356a07..ff9abbc232 100644 --- a/synapse/config/_base.pyi +++ b/synapse/config/_base.pyi @@ -7,16 +7,16 @@ from synapse.config import ( auth, captcha, cas, - consent_config, + consent, database, emailconfig, experimental, groups, - jwt_config, + jwt, key, logger, metrics, - oidc_config, + oidc, password_auth_providers, push, ratelimiting, @@ -24,9 +24,9 @@ from synapse.config import ( registration, repository, room_directory, - saml2_config, + saml2, server, - server_notices_config, + server_notices, spam_checker, sso, stats, @@ -65,11 +65,11 @@ class RootConfig: api: api.ApiConfig appservice: appservice.AppServiceConfig key: key.KeyConfig - saml2: saml2_config.SAML2Config + saml2: saml2.SAML2Config cas: cas.CasConfig sso: sso.SSOConfig - oidc: oidc_config.OIDCConfig - jwt: jwt_config.JWTConfig + oidc: oidc.OIDCConfig + jwt: jwt.JWTConfig auth: auth.AuthConfig email: emailconfig.EmailConfig worker: workers.WorkerConfig @@ -78,9 +78,9 @@ class RootConfig: spamchecker: spam_checker.SpamCheckerConfig groups: groups.GroupsConfig userdirectory: user_directory.UserDirectoryConfig - consent: consent_config.ConsentConfig + consent: consent.ConsentConfig stats: stats.StatsConfig - servernotices: server_notices_config.ServerNoticesConfig + servernotices: server_notices.ServerNoticesConfig roomdirectory: room_directory.RoomDirectoryConfig thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig tracer: tracer.TracerConfig diff --git a/synapse/config/consent.py b/synapse/config/consent.py new file mode 100644 index 0000000000..30d07cc219 --- /dev/null +++ b/synapse/config/consent.py @@ -0,0 +1,119 @@ +# Copyright 2018 New Vector Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import path
+
+from synapse.config import ConfigError
+
+from ._base import Config
+
+DEFAULT_CONFIG = """\
+# User Consent configuration
+#
+# for detailed instructions, see
+# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
+#
+# Parts of this section are required if enabling the 'consent' resource under
+# 'listeners', in particular 'template_dir' and 'version'.
+#
+# 'template_dir' gives the location of the templates for the HTML forms.
+# This directory should contain one subdirectory per language (eg, 'en', 'fr'),
+# and each language directory should contain the policy document (named as
+# '<version>.html') and a success page (success.html).
+#
+# 'version' specifies the 'current' version of the policy document. It defines
+# the version to be served by the consent resource if there is no 'v'
+# parameter.
+#
+# 'server_notice_content', if enabled, will send a user a "Server Notice"
+# asking them to consent to the privacy policy. The 'server_notices' section
+# must also be configured for this to work. Notices will *not* be sent to
+# guest users unless 'send_server_notice_to_guests' is set to true.
+#
+# 'block_events_error', if set, will block any attempts to send events
+# until the user consents to the privacy policy. The value of the setting is
+# used as the text of the error.
+#
+# 'require_at_registration', if enabled, will add a step to the registration
+# process, similar to how captcha works. Users will be required to accept the
+# policy before their account is created.
+#
+# 'policy_name' is the display name of the policy users will see when registering
+# for an account. Has no effect unless `require_at_registration` is enabled.
+# Defaults to "Privacy Policy".
+#
+#user_consent:
+#  template_dir: res/templates/privacy
+#  version: 1.0
+#  server_notice_content:
+#    msgtype: m.text
+#    body: >-
+#      To continue using this homeserver you must review and agree to the
+#      terms and conditions at %(consent_uri)s
+#  send_server_notice_to_guests: true
+#  block_events_error: >-
+#    To continue using this homeserver you must review and agree to the
+#    terms and conditions at %(consent_uri)s
+#  require_at_registration: false
+#  policy_name: Privacy Policy
+#
+"""


+class ConsentConfig(Config):
+
+    section = "consent"
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        self.user_consent_version = None
+        self.user_consent_template_dir = None
+        self.user_consent_server_notice_content = None
+        self.user_consent_server_notice_to_guests = False
+        self.block_events_without_consent_error = None
+        self.user_consent_at_registration = False
+        self.user_consent_policy_name = "Privacy Policy"
+
+    def read_config(self, config, **kwargs):
+        consent_config = config.get("user_consent")
+        self.terms_template = self.read_template("terms.html")
+
+        if consent_config is None:
+            return
+        self.user_consent_version = str(consent_config["version"])
+        self.user_consent_template_dir = self.abspath(consent_config["template_dir"])
+        if not path.isdir(self.user_consent_template_dir):
+            raise ConfigError(
+                "Could not find template directory '%s'"
+                % (self.user_consent_template_dir,)
+            )
+        self.user_consent_server_notice_content = consent_config.get(
+            "server_notice_content"
+        )
+        self.block_events_without_consent_error = consent_config.get(
+            "block_events_error"
+        )
+        self.user_consent_server_notice_to_guests = bool(
+            consent_config.get("send_server_notice_to_guests", False)
+        )
+        self.user_consent_at_registration = bool(
+            consent_config.get("require_at_registration", False)
+        )
+        self.user_consent_policy_name = consent_config.get(
+            "policy_name", "Privacy Policy"
+        )
+
+    def generate_config_section(self, **kwargs):
+        return DEFAULT_CONFIG
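A condensed, standalone sketch of what `read_config` above does with the `user_consent` section (simplified: plain dicts instead of the `Config` machinery, and without the template-directory check):

```python
# Simplified model of ConsentConfig.read_config: consent stays disabled when
# the section is absent, and "version" is always coerced to a string so that
# a YAML value of 1.0 matches a policy template named "1.0.html".
def parse_user_consent(config: dict) -> dict:
    consent = config.get("user_consent")
    if consent is None:
        return {"enabled": False}
    return {
        "enabled": True,
        "version": str(consent["version"]),
        "require_at_registration": bool(consent.get("require_at_registration", False)),
        "policy_name": consent.get("policy_name", "Privacy Policy"),
    }

assert parse_user_consent({}) == {"enabled": False}
assert parse_user_consent({"user_consent": {"version": 1.0}})["version"] == "1.0"
```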
diff --git a/synapse/config/consent_config.py b/synapse/config/consent_config.py
deleted file mode 100644
index 30d07cc219..0000000000
--- a/synapse/config/consent_config.py
+++ /dev/null
@@ -1,119 +0,0 @@
-# Copyright 2018 New Vector Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from os import path
-
-from synapse.config import ConfigError
-
-from ._base import Config
-
-DEFAULT_CONFIG = """\
-# User Consent configuration
-#
-# for detailed instructions, see
-# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
-#
-# Parts of this section are required if enabling the 'consent' resource under
-# 'listeners', in particular 'template_dir' and 'version'.
-#
-# 'template_dir' gives the location of the templates for the HTML forms.
-# This directory should contain one subdirectory per language (eg, 'en', 'fr'),
-# and each language directory should contain the policy document (named as
-# '<version>.html') and a success page (success.html).
-#
-# 'version' specifies the 'current' version of the policy document.
It defines -# the version to be served by the consent resource if there is no 'v' -# parameter. -# -# 'server_notice_content', if enabled, will send a user a "Server Notice" -# asking them to consent to the privacy policy. The 'server_notices' section -# must also be configured for this to work. Notices will *not* be sent to -# guest users unless 'send_server_notice_to_guests' is set to true. -# -# 'block_events_error', if set, will block any attempts to send events -# until the user consents to the privacy policy. The value of the setting is -# used as the text of the error. -# -# 'require_at_registration', if enabled, will add a step to the registration -# process, similar to how captcha works. Users will be required to accept the -# policy before their account is created. -# -# 'policy_name' is the display name of the policy users will see when registering -# for an account. Has no effect unless `require_at_registration` is enabled. -# Defaults to "Privacy Policy". -# -#user_consent: -# template_dir: res/templates/privacy -# version: 1.0 -# server_notice_content: -# msgtype: m.text -# body: >- -# To continue using this homeserver you must review and agree to the -# terms and conditions at %(consent_uri)s -# send_server_notice_to_guests: true -# block_events_error: >- -# To continue using this homeserver you must review and agree to the -# terms and conditions at %(consent_uri)s -# require_at_registration: false -# policy_name: Privacy Policy -# -""" - - -class ConsentConfig(Config): - - section = "consent" - - def __init__(self, *args): - super().__init__(*args) - - self.user_consent_version = None - self.user_consent_template_dir = None - self.user_consent_server_notice_content = None - self.user_consent_server_notice_to_guests = False - self.block_events_without_consent_error = None - self.user_consent_at_registration = False - self.user_consent_policy_name = "Privacy Policy" - - def read_config(self, config, **kwargs): - consent_config = config.get("user_consent") - self.terms_template = self.read_template("terms.html") - - if consent_config is None: - return - self.user_consent_version = str(consent_config["version"]) - self.user_consent_template_dir = self.abspath(consent_config["template_dir"]) - if not path.isdir(self.user_consent_template_dir): - raise ConfigError( - "Could not find template directory '%s'" - % (self.user_consent_template_dir,) - ) - self.user_consent_server_notice_content = consent_config.get( - "server_notice_content" - ) - self.block_events_without_consent_error = consent_config.get( - "block_events_error" - ) - self.user_consent_server_notice_to_guests = bool( - consent_config.get("send_server_notice_to_guests", False) - ) - self.user_consent_at_registration = bool( - consent_config.get("require_at_registration", False) - ) - self.user_consent_policy_name = consent_config.get( - "policy_name", "Privacy Policy" - ) - - def generate_config_section(self, **kwargs): - return DEFAULT_CONFIG diff --git a/synapse/config/homeserver.py b/synapse/config/homeserver.py index 58e3bcd511..c23b66c88c 100644 --- a/synapse/config/homeserver.py +++ b/synapse/config/homeserver.py @@ -20,17 +20,17 @@ from .auth import AuthConfig from .cache import CacheConfig from .captcha import CaptchaConfig from .cas import CasConfig -from .consent_config import ConsentConfig +from .consent import ConsentConfig from .database import DatabaseConfig from .emailconfig import EmailConfig from .experimental import ExperimentalConfig from .federation import FederationConfig from .groups import 
GroupsConfig -from .jwt_config import JWTConfig +from .jwt import JWTConfig from .key import KeyConfig from .logger import LoggingConfig from .metrics import MetricsConfig -from .oidc_config import OIDCConfig +from .oidc import OIDCConfig from .password_auth_providers import PasswordAuthProviderConfig from .push import PushConfig from .ratelimiting import RatelimitConfig @@ -39,9 +39,9 @@ from .registration import RegistrationConfig from .repository import ContentRepositoryConfig from .room import RoomConfig from .room_directory import RoomDirectoryConfig -from .saml2_config import SAML2Config +from .saml2 import SAML2Config from .server import ServerConfig -from .server_notices_config import ServerNoticesConfig +from .server_notices import ServerNoticesConfig from .spam_checker import SpamCheckerConfig from .sso import SSOConfig from .stats import StatsConfig diff --git a/synapse/config/jwt.py b/synapse/config/jwt.py new file mode 100644 index 0000000000..9e07e73008 --- /dev/null +++ b/synapse/config/jwt.py @@ -0,0 +1,108 @@ +# Copyright 2015 Niklas Riekenbrauck +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from ._base import Config, ConfigError + +MISSING_JWT = """Missing jwt library. This is required for jwt login. + + Install by running: + pip install pyjwt + """ + + +class JWTConfig(Config): + section = "jwt" + + def read_config(self, config, **kwargs): + jwt_config = config.get("jwt_config", None) + if jwt_config: + self.jwt_enabled = jwt_config.get("enabled", False) + self.jwt_secret = jwt_config["secret"] + self.jwt_algorithm = jwt_config["algorithm"] + + # The issuer and audiences are optional, if provided, it is asserted + # that the claims exist on the JWT. + self.jwt_issuer = jwt_config.get("issuer") + self.jwt_audiences = jwt_config.get("audiences") + + try: + import jwt + + jwt # To stop unused lint. + except ImportError: + raise ConfigError(MISSING_JWT) + else: + self.jwt_enabled = False + self.jwt_secret = None + self.jwt_algorithm = None + self.jwt_issuer = None + self.jwt_audiences = None + + def generate_config_section(self, **kwargs): + return """\ + # JSON web token integration. The following settings can be used to make + # Synapse JSON web tokens for authentication, instead of its internal + # password database. + # + # Each JSON Web Token needs to contain a "sub" (subject) claim, which is + # used as the localpart of the mxid. + # + # Additionally, the expiration time ("exp"), not before time ("nbf"), + # and issued at ("iat") claims are validated if present. + # + # Note that this is a non-standard login type and client support is + # expected to be non-existent. + # + # See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md. + # + #jwt_config: + # Uncomment the following to enable authorization using JSON web + # tokens. Defaults to false. + # + #enabled: true + + # This is either the private shared secret or the public key used to + # decode the contents of the JSON web token. + # + # Required if 'enabled' is true. 
+ # + #secret: "provided-by-your-issuer" + + # The algorithm used to sign the JSON web token. + # + # Supported algorithms are listed at + # https://pyjwt.readthedocs.io/en/latest/algorithms.html + # + # Required if 'enabled' is true. + # + #algorithm: "provided-by-your-issuer" + + # The issuer to validate the "iss" claim against. + # + # Optional, if provided the "iss" claim will be required and + # validated for all JSON web tokens. + # + #issuer: "provided-by-your-issuer" + + # A list of audiences to validate the "aud" claim against. + # + # Optional, if provided the "aud" claim will be required and + # validated for all JSON web tokens. + # + # Note that if the "aud" claim is included in a JSON web token then + # validation will fail without configuring audiences. + # + #audiences: + # - "provided-by-your-issuer" + """ diff --git a/synapse/config/jwt_config.py b/synapse/config/jwt_config.py deleted file mode 100644 index 9e07e73008..0000000000 --- a/synapse/config/jwt_config.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright 2015 Niklas Riekenbrauck -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from ._base import Config, ConfigError - -MISSING_JWT = """Missing jwt library. This is required for jwt login. - - Install by running: - pip install pyjwt - """ - - -class JWTConfig(Config): - section = "jwt" - - def read_config(self, config, **kwargs): - jwt_config = config.get("jwt_config", None) - if jwt_config: - self.jwt_enabled = jwt_config.get("enabled", False) - self.jwt_secret = jwt_config["secret"] - self.jwt_algorithm = jwt_config["algorithm"] - - # The issuer and audiences are optional, if provided, it is asserted - # that the claims exist on the JWT. - self.jwt_issuer = jwt_config.get("issuer") - self.jwt_audiences = jwt_config.get("audiences") - - try: - import jwt - - jwt # To stop unused lint. - except ImportError: - raise ConfigError(MISSING_JWT) - else: - self.jwt_enabled = False - self.jwt_secret = None - self.jwt_algorithm = None - self.jwt_issuer = None - self.jwt_audiences = None - - def generate_config_section(self, **kwargs): - return """\ - # JSON web token integration. The following settings can be used to make - # Synapse JSON web tokens for authentication, instead of its internal - # password database. - # - # Each JSON Web Token needs to contain a "sub" (subject) claim, which is - # used as the localpart of the mxid. - # - # Additionally, the expiration time ("exp"), not before time ("nbf"), - # and issued at ("iat") claims are validated if present. - # - # Note that this is a non-standard login type and client support is - # expected to be non-existent. - # - # See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md. - # - #jwt_config: - # Uncomment the following to enable authorization using JSON web - # tokens. Defaults to false. - # - #enabled: true - - # This is either the private shared secret or the public key used to - # decode the contents of the JSON web token. - # - # Required if 'enabled' is true. 
- # - #secret: "provided-by-your-issuer" - - # The algorithm used to sign the JSON web token. - # - # Supported algorithms are listed at - # https://pyjwt.readthedocs.io/en/latest/algorithms.html - # - # Required if 'enabled' is true. - # - #algorithm: "provided-by-your-issuer" - - # The issuer to validate the "iss" claim against. - # - # Optional, if provided the "iss" claim will be required and - # validated for all JSON web tokens. - # - #issuer: "provided-by-your-issuer" - - # A list of audiences to validate the "aud" claim against. - # - # Optional, if provided the "aud" claim will be required and - # validated for all JSON web tokens. - # - # Note that if the "aud" claim is included in a JSON web token then - # validation will fail without configuring audiences. - # - #audiences: - # - "provided-by-your-issuer" - """ diff --git a/synapse/config/oidc.py b/synapse/config/oidc.py new file mode 100644 index 0000000000..72402eb81d --- /dev/null +++ b/synapse/config/oidc.py @@ -0,0 +1,595 @@ +# Copyright 2020 Quentin Gliech +# Copyright 2020-2021 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from collections import Counter +from typing import Iterable, List, Mapping, Optional, Tuple, Type + +import attr + +from synapse.config._util import validate_config +from synapse.config.sso import SsoAttributeRequirement +from synapse.python_dependencies import DependencyException, check_requirements +from synapse.types import Collection, JsonDict +from synapse.util.module_loader import load_module +from synapse.util.stringutils import parse_and_validate_mxc_uri + +from ._base import Config, ConfigError, read_file + +DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc.JinjaOidcMappingProvider" +# The module that JinjaOidcMappingProvider is in was renamed, we want to +# transparently handle both the same. +LEGACY_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingProvider" + + +class OIDCConfig(Config): + section = "oidc" + + def read_config(self, config, **kwargs): + self.oidc_providers = tuple(_parse_oidc_provider_configs(config)) + if not self.oidc_providers: + return + + try: + check_requirements("oidc") + except DependencyException as e: + raise ConfigError( + e.message # noqa: B306, DependencyException.message is a property + ) from e + + # check we don't have any duplicate idp_ids now. (The SSO handler will also + # check for duplicates when the REST listeners get registered, but that happens + # after synapse has forked so doesn't give nice errors.) + c = Counter([i.idp_id for i in self.oidc_providers]) + for idp_id, count in c.items(): + if count > 1: + raise ConfigError( + "Multiple OIDC providers have the idp_id %r." 
% idp_id
+                )
+
+        public_baseurl = self.public_baseurl
+        if public_baseurl is None:
+            raise ConfigError("oidc_config requires a public_baseurl to be set")
+        self.oidc_callback_url = public_baseurl + "_synapse/client/oidc/callback"
+
+    @property
+    def oidc_enabled(self) -> bool:
+        # OIDC is enabled if we have a provider
+        return bool(self.oidc_providers)
+
+    def generate_config_section(self, config_dir_path, server_name, **kwargs):
+        return """\
+        # List of OpenID Connect (OIDC) / OAuth 2.0 identity providers, for registration
+        # and login.
+        #
+        # Options for each entry include:
+        #
+        #   idp_id: a unique identifier for this identity provider. Used internally
+        #       by Synapse; should be a single word such as 'github'.
+        #
+        #       Note that, if this is changed, users authenticating via that provider
+        #       will no longer be recognised as the same user!
+        #
+        #       (Use "oidc" here if you are migrating from an old "oidc_config"
+        #       configuration.)
+        #
+        #   idp_name: A user-facing name for this identity provider, which is used to
+        #       offer the user a choice of login mechanisms.
+        #
+        #   idp_icon: An optional icon for this identity provider, which is presented
+        #       by clients and Synapse's own IdP picker page. If given, must be an
+        #       MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to
+        #       obtain such an MXC URI is to upload an image to an (unencrypted) room
+        #       and then copy the "url" from the source of the event.)
+        #
+        #   idp_brand: An optional brand for this identity provider, allowing clients
+        #       to style the login flow according to the identity provider in question.
+        #       See the spec for possible options here.
+        #
+        #   discover: set to 'false' to disable the use of the OIDC discovery mechanism
+        #       to discover endpoints. Defaults to true.
+        #
+        #   issuer: Required. The OIDC issuer. Used to validate tokens and (if discovery
+        #       is enabled) to discover the provider's endpoints.
+        #
+        #   client_id: Required. oauth2 client id to use.
+        #
+        #   client_secret: oauth2 client secret to use. May be omitted if
+        #       client_secret_jwt_key is given, or if client_auth_method is 'none'.
+        #
+        #   client_secret_jwt_key: Alternative to client_secret: details of a key used
+        #       to create a JSON Web Token to be used as an OAuth2 client secret. If
+        #       given, must be a dictionary with the following properties:
+        #
+        #       key: a pem-encoded signing key. Must be a suitable key for the
+        #           algorithm specified. Required unless 'key_file' is given.
+        #
+        #       key_file: the path to file containing a pem-encoded signing key file.
+        #           Required unless 'key' is given.
+        #
+        #       jwt_header: a dictionary giving properties to include in the JWT
+        #           header. Must include the key 'alg', giving the algorithm used to
+        #           sign the JWT, such as "ES256", using the JWA identifiers in
+        #           RFC7518.
+        #
+        #       jwt_payload: an optional dictionary giving properties to include in
+        #           the JWT payload. Normally this should include an 'iss' key.
+        #
+        #   client_auth_method: auth method to use when exchanging the token. Valid
+        #       values are 'client_secret_basic' (default), 'client_secret_post' and
+        #       'none'.
+        #
+        #   scopes: list of scopes to request. This should normally include the "openid"
+        #       scope. Defaults to ["openid"].
+        #
+        #   authorization_endpoint: the oauth2 authorization endpoint. Required if
+        #       provider discovery is disabled.
+        #
+        #   token_endpoint: the oauth2 token endpoint. Required if provider discovery is
+        #       disabled.
+        #
+        #   userinfo_endpoint: the OIDC userinfo endpoint. Required if discovery is
+        #       disabled and the 'openid' scope is not requested.
+ # + # jwks_uri: URI where to fetch the JWKS. Required if discovery is disabled and + # the 'openid' scope is used. + # + # skip_verification: set to 'true' to skip metadata verification. Use this if + # you are connecting to a provider that is not OpenID Connect compliant. + # Defaults to false. Avoid this in production. + # + # user_profile_method: Whether to fetch the user profile from the userinfo + # endpoint. Valid values are: 'auto' or 'userinfo_endpoint'. + # + # Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is + # included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the + # userinfo endpoint. + # + # allow_existing_users: set to 'true' to allow a user logging in via OIDC to + # match a pre-existing account instead of failing. This could be used if + # switching from password logins to OIDC. Defaults to false. + # + # user_mapping_provider: Configuration for how attributes returned from a OIDC + # provider are mapped onto a matrix user. This setting has the following + # sub-properties: + # + # module: The class name of a custom mapping module. Default is + # {mapping_provider!r}. + # See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers + # for information on implementing a custom mapping provider. + # + # config: Configuration for the mapping provider module. This section will + # be passed as a Python dictionary to the user mapping provider + # module's `parse_config` method. + # + # For the default provider, the following settings are available: + # + # subject_claim: name of the claim containing a unique identifier + # for the user. Defaults to 'sub', which OpenID Connect + # compliant providers should provide. + # + # localpart_template: Jinja2 template for the localpart of the MXID. + # If this is not set, the user will be prompted to choose their + # own username (see 'sso_auth_account_details.html' in the 'sso' + # section of this file). + # + # display_name_template: Jinja2 template for the display name to set + # on first login. If unset, no displayname will be set. + # + # email_template: Jinja2 template for the email address of the user. + # If unset, no email address will be added to the account. + # + # extra_attributes: a map of Jinja2 templates for extra attributes + # to send back to the client during login. + # Note that these are non-standard and clients will ignore them + # without modifications. + # + # When rendering, the Jinja2 templates are given a 'user' variable, + # which is set to the claims returned by the UserInfo Endpoint and/or + # in the ID Token. + # + # It is possible to configure Synapse to only allow logins if certain attributes + # match particular values in the OIDC userinfo. The requirements can be listed under + # `attribute_requirements` as shown below. All of the listed attributes must + # match for the login to be permitted. Additional attributes can be added to + # userinfo by expanding the `scopes` section of the OIDC config to retrieve + # additional information from the OIDC provider. + # + # If the OIDC claim is a list, then the attribute must match any value in the list. + # Otherwise, it must exactly match the value of the claim. Using the example + # below, the `family_name` claim MUST be "Stephensson", but the `groups` + # claim MUST contain "admin". 
+ # + # attribute_requirements: + # - attribute: family_name + # value: "Stephensson" + # - attribute: groups + # value: "admin" + # + # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md + # for information on how to configure these options. + # + # For backwards compatibility, it is also possible to configure a single OIDC + # provider via an 'oidc_config' setting. This is now deprecated and admins are + # advised to migrate to the 'oidc_providers' format. (When doing that migration, + # use 'oidc' for the idp_id to ensure that existing users continue to be + # recognised.) + # + oidc_providers: + # Generic example + # + #- idp_id: my_idp + # idp_name: "My OpenID provider" + # idp_icon: "mxc://example.com/mediaid" + # discover: false + # issuer: "https://accounts.example.com/" + # client_id: "provided-by-your-issuer" + # client_secret: "provided-by-your-issuer" + # client_auth_method: client_secret_post + # scopes: ["openid", "profile"] + # authorization_endpoint: "https://accounts.example.com/oauth2/auth" + # token_endpoint: "https://accounts.example.com/oauth2/token" + # userinfo_endpoint: "https://accounts.example.com/userinfo" + # jwks_uri: "https://accounts.example.com/.well-known/jwks.json" + # skip_verification: true + # user_mapping_provider: + # config: + # subject_claim: "id" + # localpart_template: "{{{{ user.login }}}}" + # display_name_template: "{{{{ user.name }}}}" + # email_template: "{{{{ user.email }}}}" + # attribute_requirements: + # - attribute: userGroup + # value: "synapseUsers" + """.format( + mapping_provider=DEFAULT_USER_MAPPING_PROVIDER + ) + + +# jsonschema definition of the configuration settings for an oidc identity provider +OIDC_PROVIDER_CONFIG_SCHEMA = { + "type": "object", + "required": ["issuer", "client_id"], + "properties": { + "idp_id": { + "type": "string", + "minLength": 1, + # MSC2858 allows a maxlen of 255, but we prefix with "oidc-" + "maxLength": 250, + "pattern": "^[A-Za-z0-9._~-]+$", + }, + "idp_name": {"type": "string"}, + "idp_icon": {"type": "string"}, + "idp_brand": { + "type": "string", + "minLength": 1, + "maxLength": 255, + "pattern": "^[a-z][a-z0-9_.-]*$", + }, + "idp_unstable_brand": { + "type": "string", + "minLength": 1, + "maxLength": 255, + "pattern": "^[a-z][a-z0-9_.-]*$", + }, + "discover": {"type": "boolean"}, + "issuer": {"type": "string"}, + "client_id": {"type": "string"}, + "client_secret": {"type": "string"}, + "client_secret_jwt_key": { + "type": "object", + "required": ["jwt_header"], + "oneOf": [ + {"required": ["key"]}, + {"required": ["key_file"]}, + ], + "properties": { + "key": {"type": "string"}, + "key_file": {"type": "string"}, + "jwt_header": { + "type": "object", + "required": ["alg"], + "properties": { + "alg": {"type": "string"}, + }, + "additionalProperties": {"type": "string"}, + }, + "jwt_payload": { + "type": "object", + "additionalProperties": {"type": "string"}, + }, + }, + }, + "client_auth_method": { + "type": "string", + # the following list is the same as the keys of + # authlib.oauth2.auth.ClientAuth.DEFAULT_AUTH_METHODS. We inline it + # to avoid importing authlib here. 
+ "enum": ["client_secret_basic", "client_secret_post", "none"], + }, + "scopes": {"type": "array", "items": {"type": "string"}}, + "authorization_endpoint": {"type": "string"}, + "token_endpoint": {"type": "string"}, + "userinfo_endpoint": {"type": "string"}, + "jwks_uri": {"type": "string"}, + "skip_verification": {"type": "boolean"}, + "user_profile_method": { + "type": "string", + "enum": ["auto", "userinfo_endpoint"], + }, + "allow_existing_users": {"type": "boolean"}, + "user_mapping_provider": {"type": ["object", "null"]}, + "attribute_requirements": { + "type": "array", + "items": SsoAttributeRequirement.JSON_SCHEMA, + }, + }, +} + +# the same as OIDC_PROVIDER_CONFIG_SCHEMA, but with compulsory idp_id and idp_name +OIDC_PROVIDER_CONFIG_WITH_ID_SCHEMA = { + "allOf": [OIDC_PROVIDER_CONFIG_SCHEMA, {"required": ["idp_id", "idp_name"]}] +} + + +# the `oidc_providers` list can either be None (as it is in the default config), or +# a list of provider configs, each of which requires an explicit ID and name. +OIDC_PROVIDER_LIST_SCHEMA = { + "oneOf": [ + {"type": "null"}, + {"type": "array", "items": OIDC_PROVIDER_CONFIG_WITH_ID_SCHEMA}, + ] +} + +# the `oidc_config` setting can either be None (which it used to be in the default +# config), or an object. If an object, it is ignored unless it has an "enabled: True" +# property. +# +# It's *possible* to represent this with jsonschema, but the resultant errors aren't +# particularly clear, so we just check for either an object or a null here, and do +# additional checks in the code. +OIDC_CONFIG_SCHEMA = {"oneOf": [{"type": "null"}, {"type": "object"}]} + +# the top-level schema can contain an "oidc_config" and/or an "oidc_providers". +MAIN_CONFIG_SCHEMA = { + "type": "object", + "properties": { + "oidc_config": OIDC_CONFIG_SCHEMA, + "oidc_providers": OIDC_PROVIDER_LIST_SCHEMA, + }, +} + + +def _parse_oidc_provider_configs(config: JsonDict) -> Iterable["OidcProviderConfig"]: + """extract and parse the OIDC provider configs from the config dict + + The configuration may contain either a single `oidc_config` object with an + `enabled: True` property, or a list of provider configurations under + `oidc_providers`, *or both*. + + Returns a generator which yields the OidcProviderConfig objects + """ + validate_config(MAIN_CONFIG_SCHEMA, config, ()) + + for i, p in enumerate(config.get("oidc_providers") or []): + yield _parse_oidc_config_dict(p, ("oidc_providers", "" % (i,))) + + # for backwards-compatibility, it is also possible to provide a single "oidc_config" + # object with an "enabled: True" property. + oidc_config = config.get("oidc_config") + if oidc_config and oidc_config.get("enabled", False): + # MAIN_CONFIG_SCHEMA checks that `oidc_config` is an object, but not that + # it matches OIDC_PROVIDER_CONFIG_SCHEMA (see the comments on OIDC_CONFIG_SCHEMA + # above), so now we need to validate it. + validate_config(OIDC_PROVIDER_CONFIG_SCHEMA, oidc_config, ("oidc_config",)) + yield _parse_oidc_config_dict(oidc_config, ("oidc_config",)) + + +def _parse_oidc_config_dict( + oidc_config: JsonDict, config_path: Tuple[str, ...] +) -> "OidcProviderConfig": + """Take the configuration dict and parse it into an OidcProviderConfig + + Raises: + ConfigError if the configuration is malformed. 
+ """ + ump_config = oidc_config.get("user_mapping_provider", {}) + ump_config.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER) + if ump_config.get("module") == LEGACY_USER_MAPPING_PROVIDER: + ump_config["module"] = DEFAULT_USER_MAPPING_PROVIDER + ump_config.setdefault("config", {}) + + ( + user_mapping_provider_class, + user_mapping_provider_config, + ) = load_module(ump_config, config_path + ("user_mapping_provider",)) + + # Ensure loaded user mapping module has defined all necessary methods + required_methods = [ + "get_remote_user_id", + "map_user_attributes", + ] + missing_methods = [ + method + for method in required_methods + if not hasattr(user_mapping_provider_class, method) + ] + if missing_methods: + raise ConfigError( + "Class %s is missing required " + "methods: %s" + % ( + user_mapping_provider_class, + ", ".join(missing_methods), + ), + config_path + ("user_mapping_provider", "module"), + ) + + idp_id = oidc_config.get("idp_id", "oidc") + + # prefix the given IDP with a prefix specific to the SSO mechanism, to avoid + # clashes with other mechs (such as SAML, CAS). + # + # We allow "oidc" as an exception so that people migrating from old-style + # "oidc_config" format (which has long used "oidc" as its idp_id) can migrate to + # a new-style "oidc_providers" entry without changing the idp_id for their provider + # (and thereby invalidating their user_external_ids data). + + if idp_id != "oidc": + idp_id = "oidc-" + idp_id + + # MSC2858 also specifies that the idp_icon must be a valid MXC uri + idp_icon = oidc_config.get("idp_icon") + if idp_icon is not None: + try: + parse_and_validate_mxc_uri(idp_icon) + except ValueError as e: + raise ConfigError( + "idp_icon must be a valid MXC URI", config_path + ("idp_icon",) + ) from e + + client_secret_jwt_key_config = oidc_config.get("client_secret_jwt_key") + client_secret_jwt_key = None # type: Optional[OidcProviderClientSecretJwtKey] + if client_secret_jwt_key_config is not None: + keyfile = client_secret_jwt_key_config.get("key_file") + if keyfile: + key = read_file(keyfile, config_path + ("client_secret_jwt_key",)) + else: + key = client_secret_jwt_key_config["key"] + client_secret_jwt_key = OidcProviderClientSecretJwtKey( + key=key, + jwt_header=client_secret_jwt_key_config["jwt_header"], + jwt_payload=client_secret_jwt_key_config.get("jwt_payload", {}), + ) + # parse attribute_requirements from config (list of dicts) into a list of SsoAttributeRequirement + attribute_requirements = [ + SsoAttributeRequirement(**x) + for x in oidc_config.get("attribute_requirements", []) + ] + + return OidcProviderConfig( + idp_id=idp_id, + idp_name=oidc_config.get("idp_name", "OIDC"), + idp_icon=idp_icon, + idp_brand=oidc_config.get("idp_brand"), + unstable_idp_brand=oidc_config.get("unstable_idp_brand"), + discover=oidc_config.get("discover", True), + issuer=oidc_config["issuer"], + client_id=oidc_config["client_id"], + client_secret=oidc_config.get("client_secret"), + client_secret_jwt_key=client_secret_jwt_key, + client_auth_method=oidc_config.get("client_auth_method", "client_secret_basic"), + scopes=oidc_config.get("scopes", ["openid"]), + authorization_endpoint=oidc_config.get("authorization_endpoint"), + token_endpoint=oidc_config.get("token_endpoint"), + userinfo_endpoint=oidc_config.get("userinfo_endpoint"), + jwks_uri=oidc_config.get("jwks_uri"), + skip_verification=oidc_config.get("skip_verification", False), + user_profile_method=oidc_config.get("user_profile_method", "auto"), + 
allow_existing_users=oidc_config.get("allow_existing_users", False), + user_mapping_provider_class=user_mapping_provider_class, + user_mapping_provider_config=user_mapping_provider_config, + attribute_requirements=attribute_requirements, + ) + + +@attr.s(slots=True, frozen=True) +class OidcProviderClientSecretJwtKey: + # a pem-encoded signing key + key = attr.ib(type=str) + + # properties to include in the JWT header + jwt_header = attr.ib(type=Mapping[str, str]) + + # properties to include in the JWT payload. + jwt_payload = attr.ib(type=Mapping[str, str]) + + +@attr.s(slots=True, frozen=True) +class OidcProviderConfig: + # a unique identifier for this identity provider. Used in the 'user_external_ids' + # table, as well as the query/path parameter used in the login protocol. + idp_id = attr.ib(type=str) + + # user-facing name for this identity provider. + idp_name = attr.ib(type=str) + + # Optional MXC URI for icon for this IdP. + idp_icon = attr.ib(type=Optional[str]) + + # Optional brand identifier for this IdP. + idp_brand = attr.ib(type=Optional[str]) + + # Optional brand identifier for the unstable API (see MSC2858). + unstable_idp_brand = attr.ib(type=Optional[str]) + + # whether the OIDC discovery mechanism is used to discover endpoints + discover = attr.ib(type=bool) + + # the OIDC issuer. Used to validate tokens and (if discovery is enabled) to + # discover the provider's endpoints. + issuer = attr.ib(type=str) + + # oauth2 client id to use + client_id = attr.ib(type=str) + + # oauth2 client secret to use. if `None`, use client_secret_jwt_key to generate + # a secret. + client_secret = attr.ib(type=Optional[str]) + + # key to use to construct a JWT to use as a client secret. May be `None` if + # `client_secret` is set. + client_secret_jwt_key = attr.ib(type=Optional[OidcProviderClientSecretJwtKey]) + + # auth method to use when exchanging the token. + # Valid values are 'client_secret_basic', 'client_secret_post' and + # 'none'. + client_auth_method = attr.ib(type=str) + + # list of scopes to request + scopes = attr.ib(type=Collection[str]) + + # the oauth2 authorization endpoint. Required if discovery is disabled. + authorization_endpoint = attr.ib(type=Optional[str]) + + # the oauth2 token endpoint. Required if discovery is disabled. + token_endpoint = attr.ib(type=Optional[str]) + + # the OIDC userinfo endpoint. Required if discovery is disabled and the + # "openid" scope is not requested. + userinfo_endpoint = attr.ib(type=Optional[str]) + + # URI where to fetch the JWKS. Required if discovery is disabled and the + # "openid" scope is used. + jwks_uri = attr.ib(type=Optional[str]) + + # Whether to skip metadata verification + skip_verification = attr.ib(type=bool) + + # Whether to fetch the user profile from the userinfo endpoint. Valid + # values are: "auto" or "userinfo_endpoint". 
+ user_profile_method = attr.ib(type=str) + + # whether to allow a user logging in via OIDC to match a pre-existing account + # instead of failing + allow_existing_users = attr.ib(type=bool) + + # the class of the user mapping provider + user_mapping_provider_class = attr.ib(type=Type) + + # the config of the user mapping provider + user_mapping_provider_config = attr.ib() + + # required attributes to require in userinfo to allow login/registration + attribute_requirements = attr.ib(type=List[SsoAttributeRequirement]) diff --git a/synapse/config/oidc_config.py b/synapse/config/oidc_config.py deleted file mode 100644 index 5fb94376fd..0000000000 --- a/synapse/config/oidc_config.py +++ /dev/null @@ -1,590 +0,0 @@ -# Copyright 2020 Quentin Gliech -# Copyright 2020-2021 The Matrix.org Foundation C.I.C. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from collections import Counter -from typing import Iterable, List, Mapping, Optional, Tuple, Type - -import attr - -from synapse.config._util import validate_config -from synapse.config.sso import SsoAttributeRequirement -from synapse.python_dependencies import DependencyException, check_requirements -from synapse.types import Collection, JsonDict -from synapse.util.module_loader import load_module -from synapse.util.stringutils import parse_and_validate_mxc_uri - -from ._base import Config, ConfigError, read_file - -DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingProvider" - - -class OIDCConfig(Config): - section = "oidc" - - def read_config(self, config, **kwargs): - self.oidc_providers = tuple(_parse_oidc_provider_configs(config)) - if not self.oidc_providers: - return - - try: - check_requirements("oidc") - except DependencyException as e: - raise ConfigError( - e.message # noqa: B306, DependencyException.message is a property - ) from e - - # check we don't have any duplicate idp_ids now. (The SSO handler will also - # check for duplicates when the REST listeners get registered, but that happens - # after synapse has forked so doesn't give nice errors.) - c = Counter([i.idp_id for i in self.oidc_providers]) - for idp_id, count in c.items(): - if count > 1: - raise ConfigError( - "Multiple OIDC providers have the idp_id %r." % idp_id - ) - - public_baseurl = self.public_baseurl - if public_baseurl is None: - raise ConfigError("oidc_config requires a public_baseurl to be set") - self.oidc_callback_url = public_baseurl + "_synapse/client/oidc/callback" - - @property - def oidc_enabled(self) -> bool: - # OIDC is enabled if we have a provider - return bool(self.oidc_providers) - - def generate_config_section(self, config_dir_path, server_name, **kwargs): - return """\ - # List of OpenID Connect (OIDC) / OAuth 2.0 identity providers, for registration - # and login. - # - # Options for each entry include: - # - # idp_id: a unique identifier for this identity provider. Used internally - # by Synapse; should be a single word such as 'github'. 
-        #
-        #       Note that, if this is changed, users authenticating via that provider
-        #       will no longer be recognised as the same user!
-        #
-        #       (Use "oidc" here if you are migrating from an old "oidc_config"
-        #       configuration.)
-        #
-        #   idp_name: A user-facing name for this identity provider, which is used to
-        #       offer the user a choice of login mechanisms.
-        #
-        #   idp_icon: An optional icon for this identity provider, which is presented
-        #       by clients and Synapse's own IdP picker page. If given, must be an
-        #       MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to
-        #       obtain such an MXC URI is to upload an image to an (unencrypted) room
-        #       and then copy the "url" from the source of the event.)
-        #
-        #   idp_brand: An optional brand for this identity provider, allowing clients
-        #       to style the login flow according to the identity provider in question.
-        #       See the spec for possible options here.
-        #
-        #   discover: set to 'false' to disable the use of the OIDC discovery mechanism
-        #       to discover endpoints. Defaults to true.
-        #
-        #   issuer: Required. The OIDC issuer. Used to validate tokens and (if discovery
-        #       is enabled) to discover the provider's endpoints.
-        #
-        #   client_id: Required. oauth2 client id to use.
-        #
-        #   client_secret: oauth2 client secret to use. May be omitted if
-        #       client_secret_jwt_key is given, or if client_auth_method is 'none'.
-        #
-        #   client_secret_jwt_key: Alternative to client_secret: details of a key used
-        #      to create a JSON Web Token to be used as an OAuth2 client secret. If
-        #      given, must be a dictionary with the following properties:
-        #
-        #          key: a pem-encoded signing key. Must be a suitable key for the
-        #              algorithm specified. Required unless 'key_file' is given.
-        #
-        #          key_file: the path to a file containing a pem-encoded signing key.
-        #              Required unless 'key' is given.
-        #
-        #          jwt_header: a dictionary giving properties to include in the JWT
-        #              header. Must include the key 'alg', giving the algorithm used to
-        #              sign the JWT, such as "ES256", using the JWA identifiers in
-        #              RFC7518.
-        #
-        #          jwt_payload: an optional dictionary giving properties to include in
-        #              the JWT payload. Normally this should include an 'iss' key.
-        #
-        #   client_auth_method: auth method to use when exchanging the token. Valid
-        #       values are 'client_secret_basic' (default), 'client_secret_post' and
-        #       'none'.
-        #
-        #   scopes: list of scopes to request. This should normally include the "openid"
-        #       scope. Defaults to ["openid"].
-        #
-        #   authorization_endpoint: the oauth2 authorization endpoint. Required if
-        #       provider discovery is disabled.
-        #
-        #   token_endpoint: the oauth2 token endpoint. Required if provider discovery is
-        #       disabled.
-        #
-        #   userinfo_endpoint: the OIDC userinfo endpoint. Required if discovery is
-        #       disabled and the 'openid' scope is not requested.
-        #
-        #   jwks_uri: URI where to fetch the JWKS. Required if discovery is disabled and
-        #       the 'openid' scope is used.
-        #
-        #   skip_verification: set to 'true' to skip metadata verification. Use this if
-        #       you are connecting to a provider that is not OpenID Connect compliant.
-        #       Defaults to false. Avoid this in production.
-        #
-        #   user_profile_method: Whether to fetch the user profile from the userinfo
-        #       endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
-        #
-        #       Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
-        #       included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
-        #       userinfo endpoint.
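The `client_secret_jwt_key` option described above would look something like this in YAML (a minimal sketch; the key path, algorithm and `iss` claim are placeholders):

    client_secret_jwt_key:
      key_file: /path/to/signing-key.pem
      jwt_header:
        alg: ES256
      jwt_payload:
        iss: synapse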
-        #
-        #   allow_existing_users: set to 'true' to allow a user logging in via OIDC to
-        #       match a pre-existing account instead of failing. This could be used if
-        #       switching from password logins to OIDC. Defaults to false.
-        #
-        #   user_mapping_provider: Configuration for how attributes returned from an OIDC
-        #       provider are mapped onto a matrix user. This setting has the following
-        #       sub-properties:
-        #
-        #       module: The class name of a custom mapping module. Default is
-        #           {mapping_provider!r}.
-        #           See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
-        #           for information on implementing a custom mapping provider.
-        #
-        #       config: Configuration for the mapping provider module. This section will
-        #           be passed as a Python dictionary to the user mapping provider
-        #           module's `parse_config` method.
-        #
-        #           For the default provider, the following settings are available:
-        #
-        #             subject_claim: name of the claim containing a unique identifier
-        #                 for the user. Defaults to 'sub', which OpenID Connect
-        #                 compliant providers should provide.
-        #
-        #             localpart_template: Jinja2 template for the localpart of the MXID.
-        #                 If this is not set, the user will be prompted to choose their
-        #                 own username (see 'sso_auth_account_details.html' in the 'sso'
-        #                 section of this file).
-        #
-        #             display_name_template: Jinja2 template for the display name to set
-        #                 on first login. If unset, no displayname will be set.
-        #
-        #             email_template: Jinja2 template for the email address of the user.
-        #                 If unset, no email address will be added to the account.
-        #
-        #             extra_attributes: a map of Jinja2 templates for extra attributes
-        #                 to send back to the client during login.
-        #                 Note that these are non-standard and clients will ignore them
-        #                 without modifications.
-        #
-        #           When rendering, the Jinja2 templates are given a 'user' variable,
-        #           which is set to the claims returned by the UserInfo Endpoint and/or
-        #           in the ID Token.
-        #
-        #   It is possible to configure Synapse to only allow logins if certain attributes
-        #   match particular values in the OIDC userinfo. The requirements can be listed under
-        #   `attribute_requirements` as shown below. All of the listed attributes must
-        #   match for the login to be permitted. Additional attributes can be added to
-        #   userinfo by expanding the `scopes` section of the OIDC config to retrieve
-        #   additional information from the OIDC provider.
-        #
-        #   If the OIDC claim is a list, then the attribute must match any value in the list.
-        #   Otherwise, it must exactly match the value of the claim. Using the example
-        #   below, the `family_name` claim MUST be "Stephensson", but the `groups`
-        #   claim MUST contain "admin".
-        #
-        #   attribute_requirements:
-        #     - attribute: family_name
-        #       value: "Stephensson"
-        #     - attribute: groups
-        #       value: "admin"
-        #
-        #   See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
-        #   for information on how to configure these options.
-        #
-        # For backwards compatibility, it is also possible to configure a single OIDC
-        # provider via an 'oidc_config' setting. This is now deprecated and admins are
-        # advised to migrate to the 'oidc_providers' format. (When doing that migration,
-        # use 'oidc' for the idp_id to ensure that existing users continue to be
-        # recognised.)
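A minimal sketch of that migration (the issuer, client ID and secret are placeholders):

    # deprecated single-provider form:
    oidc_config:
      enabled: true
      issuer: "https://accounts.example.com/"
      client_id: "synapse"
      client_secret: "verysecret"

    # equivalent 'oidc_providers' entry:
    oidc_providers:
      - idp_id: oidc        # keep "oidc" so existing users are still recognised
        idp_name: "My OpenID provider"
        issuer: "https://accounts.example.com/"
        client_id: "synapse"
        client_secret: "verysecret"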
- # - oidc_providers: - # Generic example - # - #- idp_id: my_idp - # idp_name: "My OpenID provider" - # idp_icon: "mxc://example.com/mediaid" - # discover: false - # issuer: "https://accounts.example.com/" - # client_id: "provided-by-your-issuer" - # client_secret: "provided-by-your-issuer" - # client_auth_method: client_secret_post - # scopes: ["openid", "profile"] - # authorization_endpoint: "https://accounts.example.com/oauth2/auth" - # token_endpoint: "https://accounts.example.com/oauth2/token" - # userinfo_endpoint: "https://accounts.example.com/userinfo" - # jwks_uri: "https://accounts.example.com/.well-known/jwks.json" - # skip_verification: true - # user_mapping_provider: - # config: - # subject_claim: "id" - # localpart_template: "{{{{ user.login }}}}" - # display_name_template: "{{{{ user.name }}}}" - # email_template: "{{{{ user.email }}}}" - # attribute_requirements: - # - attribute: userGroup - # value: "synapseUsers" - """.format( - mapping_provider=DEFAULT_USER_MAPPING_PROVIDER - ) - - -# jsonschema definition of the configuration settings for an oidc identity provider -OIDC_PROVIDER_CONFIG_SCHEMA = { - "type": "object", - "required": ["issuer", "client_id"], - "properties": { - "idp_id": { - "type": "string", - "minLength": 1, - # MSC2858 allows a maxlen of 255, but we prefix with "oidc-" - "maxLength": 250, - "pattern": "^[A-Za-z0-9._~-]+$", - }, - "idp_name": {"type": "string"}, - "idp_icon": {"type": "string"}, - "idp_brand": { - "type": "string", - "minLength": 1, - "maxLength": 255, - "pattern": "^[a-z][a-z0-9_.-]*$", - }, - "idp_unstable_brand": { - "type": "string", - "minLength": 1, - "maxLength": 255, - "pattern": "^[a-z][a-z0-9_.-]*$", - }, - "discover": {"type": "boolean"}, - "issuer": {"type": "string"}, - "client_id": {"type": "string"}, - "client_secret": {"type": "string"}, - "client_secret_jwt_key": { - "type": "object", - "required": ["jwt_header"], - "oneOf": [ - {"required": ["key"]}, - {"required": ["key_file"]}, - ], - "properties": { - "key": {"type": "string"}, - "key_file": {"type": "string"}, - "jwt_header": { - "type": "object", - "required": ["alg"], - "properties": { - "alg": {"type": "string"}, - }, - "additionalProperties": {"type": "string"}, - }, - "jwt_payload": { - "type": "object", - "additionalProperties": {"type": "string"}, - }, - }, - }, - "client_auth_method": { - "type": "string", - # the following list is the same as the keys of - # authlib.oauth2.auth.ClientAuth.DEFAULT_AUTH_METHODS. We inline it - # to avoid importing authlib here. 
- "enum": ["client_secret_basic", "client_secret_post", "none"], - }, - "scopes": {"type": "array", "items": {"type": "string"}}, - "authorization_endpoint": {"type": "string"}, - "token_endpoint": {"type": "string"}, - "userinfo_endpoint": {"type": "string"}, - "jwks_uri": {"type": "string"}, - "skip_verification": {"type": "boolean"}, - "user_profile_method": { - "type": "string", - "enum": ["auto", "userinfo_endpoint"], - }, - "allow_existing_users": {"type": "boolean"}, - "user_mapping_provider": {"type": ["object", "null"]}, - "attribute_requirements": { - "type": "array", - "items": SsoAttributeRequirement.JSON_SCHEMA, - }, - }, -} - -# the same as OIDC_PROVIDER_CONFIG_SCHEMA, but with compulsory idp_id and idp_name -OIDC_PROVIDER_CONFIG_WITH_ID_SCHEMA = { - "allOf": [OIDC_PROVIDER_CONFIG_SCHEMA, {"required": ["idp_id", "idp_name"]}] -} - - -# the `oidc_providers` list can either be None (as it is in the default config), or -# a list of provider configs, each of which requires an explicit ID and name. -OIDC_PROVIDER_LIST_SCHEMA = { - "oneOf": [ - {"type": "null"}, - {"type": "array", "items": OIDC_PROVIDER_CONFIG_WITH_ID_SCHEMA}, - ] -} - -# the `oidc_config` setting can either be None (which it used to be in the default -# config), or an object. If an object, it is ignored unless it has an "enabled: True" -# property. -# -# It's *possible* to represent this with jsonschema, but the resultant errors aren't -# particularly clear, so we just check for either an object or a null here, and do -# additional checks in the code. -OIDC_CONFIG_SCHEMA = {"oneOf": [{"type": "null"}, {"type": "object"}]} - -# the top-level schema can contain an "oidc_config" and/or an "oidc_providers". -MAIN_CONFIG_SCHEMA = { - "type": "object", - "properties": { - "oidc_config": OIDC_CONFIG_SCHEMA, - "oidc_providers": OIDC_PROVIDER_LIST_SCHEMA, - }, -} - - -def _parse_oidc_provider_configs(config: JsonDict) -> Iterable["OidcProviderConfig"]: - """extract and parse the OIDC provider configs from the config dict - - The configuration may contain either a single `oidc_config` object with an - `enabled: True` property, or a list of provider configurations under - `oidc_providers`, *or both*. - - Returns a generator which yields the OidcProviderConfig objects - """ - validate_config(MAIN_CONFIG_SCHEMA, config, ()) - - for i, p in enumerate(config.get("oidc_providers") or []): - yield _parse_oidc_config_dict(p, ("oidc_providers", "" % (i,))) - - # for backwards-compatibility, it is also possible to provide a single "oidc_config" - # object with an "enabled: True" property. - oidc_config = config.get("oidc_config") - if oidc_config and oidc_config.get("enabled", False): - # MAIN_CONFIG_SCHEMA checks that `oidc_config` is an object, but not that - # it matches OIDC_PROVIDER_CONFIG_SCHEMA (see the comments on OIDC_CONFIG_SCHEMA - # above), so now we need to validate it. - validate_config(OIDC_PROVIDER_CONFIG_SCHEMA, oidc_config, ("oidc_config",)) - yield _parse_oidc_config_dict(oidc_config, ("oidc_config",)) - - -def _parse_oidc_config_dict( - oidc_config: JsonDict, config_path: Tuple[str, ...] -) -> "OidcProviderConfig": - """Take the configuration dict and parse it into an OidcProviderConfig - - Raises: - ConfigError if the configuration is malformed. 
- """ - ump_config = oidc_config.get("user_mapping_provider", {}) - ump_config.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER) - ump_config.setdefault("config", {}) - - ( - user_mapping_provider_class, - user_mapping_provider_config, - ) = load_module(ump_config, config_path + ("user_mapping_provider",)) - - # Ensure loaded user mapping module has defined all necessary methods - required_methods = [ - "get_remote_user_id", - "map_user_attributes", - ] - missing_methods = [ - method - for method in required_methods - if not hasattr(user_mapping_provider_class, method) - ] - if missing_methods: - raise ConfigError( - "Class %s is missing required " - "methods: %s" - % ( - user_mapping_provider_class, - ", ".join(missing_methods), - ), - config_path + ("user_mapping_provider", "module"), - ) - - idp_id = oidc_config.get("idp_id", "oidc") - - # prefix the given IDP with a prefix specific to the SSO mechanism, to avoid - # clashes with other mechs (such as SAML, CAS). - # - # We allow "oidc" as an exception so that people migrating from old-style - # "oidc_config" format (which has long used "oidc" as its idp_id) can migrate to - # a new-style "oidc_providers" entry without changing the idp_id for their provider - # (and thereby invalidating their user_external_ids data). - - if idp_id != "oidc": - idp_id = "oidc-" + idp_id - - # MSC2858 also specifies that the idp_icon must be a valid MXC uri - idp_icon = oidc_config.get("idp_icon") - if idp_icon is not None: - try: - parse_and_validate_mxc_uri(idp_icon) - except ValueError as e: - raise ConfigError( - "idp_icon must be a valid MXC URI", config_path + ("idp_icon",) - ) from e - - client_secret_jwt_key_config = oidc_config.get("client_secret_jwt_key") - client_secret_jwt_key = None # type: Optional[OidcProviderClientSecretJwtKey] - if client_secret_jwt_key_config is not None: - keyfile = client_secret_jwt_key_config.get("key_file") - if keyfile: - key = read_file(keyfile, config_path + ("client_secret_jwt_key",)) - else: - key = client_secret_jwt_key_config["key"] - client_secret_jwt_key = OidcProviderClientSecretJwtKey( - key=key, - jwt_header=client_secret_jwt_key_config["jwt_header"], - jwt_payload=client_secret_jwt_key_config.get("jwt_payload", {}), - ) - # parse attribute_requirements from config (list of dicts) into a list of SsoAttributeRequirement - attribute_requirements = [ - SsoAttributeRequirement(**x) - for x in oidc_config.get("attribute_requirements", []) - ] - - return OidcProviderConfig( - idp_id=idp_id, - idp_name=oidc_config.get("idp_name", "OIDC"), - idp_icon=idp_icon, - idp_brand=oidc_config.get("idp_brand"), - unstable_idp_brand=oidc_config.get("unstable_idp_brand"), - discover=oidc_config.get("discover", True), - issuer=oidc_config["issuer"], - client_id=oidc_config["client_id"], - client_secret=oidc_config.get("client_secret"), - client_secret_jwt_key=client_secret_jwt_key, - client_auth_method=oidc_config.get("client_auth_method", "client_secret_basic"), - scopes=oidc_config.get("scopes", ["openid"]), - authorization_endpoint=oidc_config.get("authorization_endpoint"), - token_endpoint=oidc_config.get("token_endpoint"), - userinfo_endpoint=oidc_config.get("userinfo_endpoint"), - jwks_uri=oidc_config.get("jwks_uri"), - skip_verification=oidc_config.get("skip_verification", False), - user_profile_method=oidc_config.get("user_profile_method", "auto"), - allow_existing_users=oidc_config.get("allow_existing_users", False), - user_mapping_provider_class=user_mapping_provider_class, - 
user_mapping_provider_config=user_mapping_provider_config, - attribute_requirements=attribute_requirements, - ) - - -@attr.s(slots=True, frozen=True) -class OidcProviderClientSecretJwtKey: - # a pem-encoded signing key - key = attr.ib(type=str) - - # properties to include in the JWT header - jwt_header = attr.ib(type=Mapping[str, str]) - - # properties to include in the JWT payload. - jwt_payload = attr.ib(type=Mapping[str, str]) - - -@attr.s(slots=True, frozen=True) -class OidcProviderConfig: - # a unique identifier for this identity provider. Used in the 'user_external_ids' - # table, as well as the query/path parameter used in the login protocol. - idp_id = attr.ib(type=str) - - # user-facing name for this identity provider. - idp_name = attr.ib(type=str) - - # Optional MXC URI for icon for this IdP. - idp_icon = attr.ib(type=Optional[str]) - - # Optional brand identifier for this IdP. - idp_brand = attr.ib(type=Optional[str]) - - # Optional brand identifier for the unstable API (see MSC2858). - unstable_idp_brand = attr.ib(type=Optional[str]) - - # whether the OIDC discovery mechanism is used to discover endpoints - discover = attr.ib(type=bool) - - # the OIDC issuer. Used to validate tokens and (if discovery is enabled) to - # discover the provider's endpoints. - issuer = attr.ib(type=str) - - # oauth2 client id to use - client_id = attr.ib(type=str) - - # oauth2 client secret to use. if `None`, use client_secret_jwt_key to generate - # a secret. - client_secret = attr.ib(type=Optional[str]) - - # key to use to construct a JWT to use as a client secret. May be `None` if - # `client_secret` is set. - client_secret_jwt_key = attr.ib(type=Optional[OidcProviderClientSecretJwtKey]) - - # auth method to use when exchanging the token. - # Valid values are 'client_secret_basic', 'client_secret_post' and - # 'none'. - client_auth_method = attr.ib(type=str) - - # list of scopes to request - scopes = attr.ib(type=Collection[str]) - - # the oauth2 authorization endpoint. Required if discovery is disabled. - authorization_endpoint = attr.ib(type=Optional[str]) - - # the oauth2 token endpoint. Required if discovery is disabled. - token_endpoint = attr.ib(type=Optional[str]) - - # the OIDC userinfo endpoint. Required if discovery is disabled and the - # "openid" scope is not requested. - userinfo_endpoint = attr.ib(type=Optional[str]) - - # URI where to fetch the JWKS. Required if discovery is disabled and the - # "openid" scope is used. - jwks_uri = attr.ib(type=Optional[str]) - - # Whether to skip metadata verification - skip_verification = attr.ib(type=bool) - - # Whether to fetch the user profile from the userinfo endpoint. Valid - # values are: "auto" or "userinfo_endpoint". - user_profile_method = attr.ib(type=str) - - # whether to allow a user logging in via OIDC to match a pre-existing account - # instead of failing - allow_existing_users = attr.ib(type=bool) - - # the class of the user mapping provider - user_mapping_provider_class = attr.ib(type=Type) - - # the config of the user mapping provider - user_mapping_provider_config = attr.ib() - - # required attributes to require in userinfo to allow login/registration - attribute_requirements = attr.ib(type=List[SsoAttributeRequirement]) diff --git a/synapse/config/saml2.py b/synapse/config/saml2.py new file mode 100644 index 0000000000..3d1218c8d1 --- /dev/null +++ b/synapse/config/saml2.py @@ -0,0 +1,420 @@ +# Copyright 2018 New Vector Ltd +# Copyright 2019 The Matrix.org Foundation C.I.C. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +from typing import Any, List + +from synapse.config.sso import SsoAttributeRequirement +from synapse.python_dependencies import DependencyException, check_requirements +from synapse.util.module_loader import load_module, load_python_module + +from ._base import Config, ConfigError +from ._util import validate_config + +logger = logging.getLogger(__name__) + +DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.saml.DefaultSamlMappingProvider" +# The module that DefaultSamlMappingProvider is in was renamed, we want to +# transparently handle both the same. +LEGACY_USER_MAPPING_PROVIDER = ( + "synapse.handlers.saml_handler.DefaultSamlMappingProvider" +) + + +def _dict_merge(merge_dict, into_dict): + """Do a deep merge of two dicts + + Recursively merges `merge_dict` into `into_dict`: + * For keys where both `merge_dict` and `into_dict` have a dict value, the values + are recursively merged + * For all other keys, the values in `into_dict` (if any) are overwritten with + the value from `merge_dict`. + + Args: + merge_dict (dict): dict to merge + into_dict (dict): target dict + """ + for k, v in merge_dict.items(): + if k not in into_dict: + into_dict[k] = v + continue + + current_val = into_dict[k] + + if isinstance(v, dict) and isinstance(current_val, dict): + _dict_merge(v, current_val) + continue + + # otherwise we just overwrite + into_dict[k] = v + + +class SAML2Config(Config): + section = "saml2" + + def read_config(self, config, **kwargs): + self.saml2_enabled = False + + saml2_config = config.get("saml2_config") + + if not saml2_config or not saml2_config.get("enabled", True): + return + + if not saml2_config.get("sp_config") and not saml2_config.get("config_path"): + return + + try: + check_requirements("saml2") + except DependencyException as e: + raise ConfigError( + e.message # noqa: B306, DependencyException.message is a property + ) + + self.saml2_enabled = True + + attribute_requirements = saml2_config.get("attribute_requirements") or [] + self.attribute_requirements = _parse_attribute_requirements_def( + attribute_requirements + ) + + self.saml2_grandfathered_mxid_source_attribute = saml2_config.get( + "grandfathered_mxid_source_attribute", "uid" + ) + + self.saml2_idp_entityid = saml2_config.get("idp_entityid", None) + + # user_mapping_provider may be None if the key is present but has no value + ump_dict = saml2_config.get("user_mapping_provider") or {} + + # Use the default user mapping provider if not set + ump_dict.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER) + if ump_dict.get("module") == LEGACY_USER_MAPPING_PROVIDER: + ump_dict["module"] = DEFAULT_USER_MAPPING_PROVIDER + + # Ensure a config is present + ump_dict["config"] = ump_dict.get("config") or {} + + if ump_dict["module"] == DEFAULT_USER_MAPPING_PROVIDER: + # Load deprecated options for use by the default module + old_mxid_source_attribute = saml2_config.get("mxid_source_attribute") + if old_mxid_source_attribute: + logger.warning( + "The config 
option saml2_config.mxid_source_attribute is deprecated. " + "Please use saml2_config.user_mapping_provider.config" + ".mxid_source_attribute instead." + ) + ump_dict["config"]["mxid_source_attribute"] = old_mxid_source_attribute + + old_mxid_mapping = saml2_config.get("mxid_mapping") + if old_mxid_mapping: + logger.warning( + "The config option saml2_config.mxid_mapping is deprecated. Please " + "use saml2_config.user_mapping_provider.config.mxid_mapping instead." + ) + ump_dict["config"]["mxid_mapping"] = old_mxid_mapping + + # Retrieve an instance of the module's class + # Pass the config dictionary to the module for processing + ( + self.saml2_user_mapping_provider_class, + self.saml2_user_mapping_provider_config, + ) = load_module(ump_dict, ("saml2_config", "user_mapping_provider")) + + # Ensure loaded user mapping module has defined all necessary methods + # Note parse_config() is already checked during the call to load_module + required_methods = [ + "get_saml_attributes", + "saml_response_to_user_attributes", + "get_remote_user_id", + ] + missing_methods = [ + method + for method in required_methods + if not hasattr(self.saml2_user_mapping_provider_class, method) + ] + if missing_methods: + raise ConfigError( + "Class specified by saml2_config." + "user_mapping_provider.module is missing required " + "methods: %s" % (", ".join(missing_methods),) + ) + + # Get the desired saml auth response attributes from the module + saml2_config_dict = self._default_saml_config_dict( + *self.saml2_user_mapping_provider_class.get_saml_attributes( + self.saml2_user_mapping_provider_config + ) + ) + _dict_merge( + merge_dict=saml2_config.get("sp_config", {}), into_dict=saml2_config_dict + ) + + config_path = saml2_config.get("config_path", None) + if config_path is not None: + mod = load_python_module(config_path) + _dict_merge(merge_dict=mod.CONFIG, into_dict=saml2_config_dict) + + import saml2.config + + self.saml2_sp_config = saml2.config.SPConfig() + self.saml2_sp_config.load(saml2_config_dict) + + # session lifetime: in milliseconds + self.saml2_session_lifetime = self.parse_duration( + saml2_config.get("saml_session_lifetime", "15m") + ) + + def _default_saml_config_dict( + self, required_attributes: set, optional_attributes: set + ): + """Generate a configuration dictionary with required and optional attributes that + will be needed to process new user registration + + Args: + required_attributes: SAML auth response attributes that are + necessary to function + optional_attributes: SAML auth response attributes that can be used to add + additional information to Synapse user accounts, but are not required + + Returns: + dict: A SAML configuration dictionary + """ + import saml2 + + public_baseurl = self.public_baseurl + if public_baseurl is None: + raise ConfigError("saml2_config requires a public_baseurl to be set") + + if self.saml2_grandfathered_mxid_source_attribute: + optional_attributes.add(self.saml2_grandfathered_mxid_source_attribute) + optional_attributes -= required_attributes + + metadata_url = public_baseurl + "_synapse/client/saml2/metadata.xml" + response_url = public_baseurl + "_synapse/client/saml2/authn_response" + return { + "entityid": metadata_url, + "service": { + "sp": { + "endpoints": { + "assertion_consumer_service": [ + (response_url, saml2.BINDING_HTTP_POST) + ] + }, + "required_attributes": list(required_attributes), + "optional_attributes": list(optional_attributes), + # "name_id_format": saml2.saml.NAMEID_FORMAT_PERSISTENT, + } + }, + } + + def 
generate_config_section(self, config_dir_path, server_name, **kwargs):
+        return """\
+        ## Single sign-on integration ##
+
+        # The following settings can be used to make Synapse use a single sign-on
+        # provider for authentication, instead of its internal password database.
+        #
+        # You will probably also want to set the following options to `false` to
+        # disable the regular login/registration flows:
+        #   * enable_registration
+        #   * password_config.enabled
+        #
+        # You will also want to investigate the settings under the "sso" configuration
+        # section below.
+
+        # Enable SAML2 for registration and login. Uses pysaml2.
+        #
+        # At least one of `sp_config` or `config_path` must be set in this section to
+        # enable SAML login.
+        #
+        # Once SAML support is enabled, a metadata file will be exposed at
+        # https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to
+        # use to configure your SAML IdP with. Alternatively, you can manually configure
+        # the IdP to use an ACS location of
+        # https://<server>:<port>/_synapse/client/saml2/authn_response.
+        #
+        saml2_config:
+          # `sp_config` is the configuration for the pysaml2 Service Provider.
+          # See pysaml2 docs for format of config.
+          #
+          # Default values will be used for the 'entityid' and 'service' settings,
+          # so it is not normally necessary to specify them unless you need to
+          # override them.
+          #
+          sp_config:
+            # Point this to the IdP's metadata. You must provide either a local
+            # file via the `local` attribute or (preferably) a URL via the
+            # `remote` attribute.
+            #
+            #metadata:
+            #  local: ["saml2/idp.xml"]
+            #  remote:
+            #    - url: https://our_idp/metadata.xml
+
+            # Allowed clock difference in seconds between the homeserver and IdP.
+            #
+            # Uncomment the below to increase the accepted time difference from 0 to 3 seconds.
+            #
+            #accepted_time_diff: 3
+
+            # By default, the user has to go to our login page first. If you'd like
+            # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
+            # 'service.sp' section:
+            #
+            #service:
+            #  sp:
+            #    allow_unsolicited: true
+
+            # The examples below are just used to generate our metadata xml, and you
+            # may well not need them, depending on your setup. Alternatively you
+            # may need a whole lot more detail - see the pysaml2 docs!
+
+            #description: ["My awesome SP", "en"]
+            #name: ["Test SP", "en"]
+
+            #ui_info:
+            #  display_name:
+            #    - lang: en
+            #      text: "Display Name is the descriptive name of your service."
+            #  description:
+            #    - lang: en
+            #      text: "Description should be a short paragraph explaining the purpose of the service."
+            #  information_url:
+            #    - lang: en
+            #      text: "https://example.com/terms-of-service"
+            #  privacy_statement_url:
+            #    - lang: en
+            #      text: "https://example.com/privacy-policy"
+            #  keywords:
+            #    - lang: en
+            #      text: ["Matrix", "Element"]
+            #  logo:
+            #    - lang: en
+            #      text: "https://example.com/logo.svg"
+            #      width: "200"
+            #      height: "80"
+
+            #organization:
+            #  name: Example com
+            #  display_name:
+            #    - ["Example co", "en"]
+            #  url: "http://example.com"
+
+            #contact_person:
+            #  - given_name: Bob
+            #    sur_name: "the Sysadmin"
+            #    email_address: ["admin@example.com"]
+            #    contact_type: technical
+
+          # Instead of putting the config inline as above, you can specify a
+          # separate pysaml2 configuration file:
+          #
+          #config_path: "%(config_dir_path)s/sp_conf.py"
+
+          # The lifetime of a SAML session. This defines how long a user has to
+          # complete the authentication process, if allow_unsolicited is unset.
+          # The default is 15 minutes.
+ # + #saml_session_lifetime: 5m + + # An external module can be provided here as a custom solution to + # mapping attributes returned from a saml provider onto a matrix user. + # + user_mapping_provider: + # The custom module's class. Uncomment to use a custom module. + # + #module: mapping_provider.SamlMappingProvider + + # Custom configuration values for the module. Below options are + # intended for the built-in provider, they should be changed if + # using a custom module. This section will be passed as a Python + # dictionary to the module's `parse_config` method. + # + config: + # The SAML attribute (after mapping via the attribute maps) to use + # to derive the Matrix ID from. 'uid' by default. + # + # Note: This used to be configured by the + # saml2_config.mxid_source_attribute option. If that is still + # defined, its value will be used instead. + # + #mxid_source_attribute: displayName + + # The mapping system to use for mapping the saml attribute onto a + # matrix ID. + # + # Options include: + # * 'hexencode' (which maps unpermitted characters to '=xx') + # * 'dotreplace' (which replaces unpermitted characters with + # '.'). + # The default is 'hexencode'. + # + # Note: This used to be configured by the + # saml2_config.mxid_mapping option. If that is still defined, its + # value will be used instead. + # + #mxid_mapping: dotreplace + + # In previous versions of synapse, the mapping from SAML attribute to + # MXID was always calculated dynamically rather than stored in a + # table. For backwards- compatibility, we will look for user_ids + # matching such a pattern before creating a new account. + # + # This setting controls the SAML attribute which will be used for this + # backwards-compatibility lookup. Typically it should be 'uid', but if + # the attribute maps are changed, it may be necessary to change it. + # + # The default is 'uid'. + # + #grandfathered_mxid_source_attribute: upn + + # It is possible to configure Synapse to only allow logins if SAML attributes + # match particular values. The requirements can be listed under + # `attribute_requirements` as shown below. All of the listed attributes must + # match for the login to be permitted. + # + #attribute_requirements: + # - attribute: userGroup + # value: "staff" + # - attribute: department + # value: "sales" + + # If the metadata XML contains multiple IdP entities then the `idp_entityid` + # option must be set to the entity to redirect users to. + # + # Most deployments only have a single IdP entity and so should omit this + # option. + # + #idp_entityid: 'https://our_idp/entityid' + """ % { + "config_dir_path": config_dir_path + } + + +ATTRIBUTE_REQUIREMENTS_SCHEMA = { + "type": "array", + "items": SsoAttributeRequirement.JSON_SCHEMA, +} + + +def _parse_attribute_requirements_def( + attribute_requirements: Any, +) -> List[SsoAttributeRequirement]: + validate_config( + ATTRIBUTE_REQUIREMENTS_SCHEMA, + attribute_requirements, + config_path=("saml2_config", "attribute_requirements"), + ) + return [SsoAttributeRequirement(**x) for x in attribute_requirements] diff --git a/synapse/config/saml2_config.py b/synapse/config/saml2_config.py deleted file mode 100644 index 55a7838b10..0000000000 --- a/synapse/config/saml2_config.py +++ /dev/null @@ -1,415 +0,0 @@ -# Copyright 2018 New Vector Ltd -# Copyright 2019 The Matrix.org Foundation C.I.C. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import logging -from typing import Any, List - -from synapse.config.sso import SsoAttributeRequirement -from synapse.python_dependencies import DependencyException, check_requirements -from synapse.util.module_loader import load_module, load_python_module - -from ._base import Config, ConfigError -from ._util import validate_config - -logger = logging.getLogger(__name__) - -DEFAULT_USER_MAPPING_PROVIDER = ( - "synapse.handlers.saml_handler.DefaultSamlMappingProvider" -) - - -def _dict_merge(merge_dict, into_dict): - """Do a deep merge of two dicts - - Recursively merges `merge_dict` into `into_dict`: - * For keys where both `merge_dict` and `into_dict` have a dict value, the values - are recursively merged - * For all other keys, the values in `into_dict` (if any) are overwritten with - the value from `merge_dict`. - - Args: - merge_dict (dict): dict to merge - into_dict (dict): target dict - """ - for k, v in merge_dict.items(): - if k not in into_dict: - into_dict[k] = v - continue - - current_val = into_dict[k] - - if isinstance(v, dict) and isinstance(current_val, dict): - _dict_merge(v, current_val) - continue - - # otherwise we just overwrite - into_dict[k] = v - - -class SAML2Config(Config): - section = "saml2" - - def read_config(self, config, **kwargs): - self.saml2_enabled = False - - saml2_config = config.get("saml2_config") - - if not saml2_config or not saml2_config.get("enabled", True): - return - - if not saml2_config.get("sp_config") and not saml2_config.get("config_path"): - return - - try: - check_requirements("saml2") - except DependencyException as e: - raise ConfigError( - e.message # noqa: B306, DependencyException.message is a property - ) - - self.saml2_enabled = True - - attribute_requirements = saml2_config.get("attribute_requirements") or [] - self.attribute_requirements = _parse_attribute_requirements_def( - attribute_requirements - ) - - self.saml2_grandfathered_mxid_source_attribute = saml2_config.get( - "grandfathered_mxid_source_attribute", "uid" - ) - - self.saml2_idp_entityid = saml2_config.get("idp_entityid", None) - - # user_mapping_provider may be None if the key is present but has no value - ump_dict = saml2_config.get("user_mapping_provider") or {} - - # Use the default user mapping provider if not set - ump_dict.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER) - - # Ensure a config is present - ump_dict["config"] = ump_dict.get("config") or {} - - if ump_dict["module"] == DEFAULT_USER_MAPPING_PROVIDER: - # Load deprecated options for use by the default module - old_mxid_source_attribute = saml2_config.get("mxid_source_attribute") - if old_mxid_source_attribute: - logger.warning( - "The config option saml2_config.mxid_source_attribute is deprecated. " - "Please use saml2_config.user_mapping_provider.config" - ".mxid_source_attribute instead." - ) - ump_dict["config"]["mxid_source_attribute"] = old_mxid_source_attribute - - old_mxid_mapping = saml2_config.get("mxid_mapping") - if old_mxid_mapping: - logger.warning( - "The config option saml2_config.mxid_mapping is deprecated. 
Please " - "use saml2_config.user_mapping_provider.config.mxid_mapping instead." - ) - ump_dict["config"]["mxid_mapping"] = old_mxid_mapping - - # Retrieve an instance of the module's class - # Pass the config dictionary to the module for processing - ( - self.saml2_user_mapping_provider_class, - self.saml2_user_mapping_provider_config, - ) = load_module(ump_dict, ("saml2_config", "user_mapping_provider")) - - # Ensure loaded user mapping module has defined all necessary methods - # Note parse_config() is already checked during the call to load_module - required_methods = [ - "get_saml_attributes", - "saml_response_to_user_attributes", - "get_remote_user_id", - ] - missing_methods = [ - method - for method in required_methods - if not hasattr(self.saml2_user_mapping_provider_class, method) - ] - if missing_methods: - raise ConfigError( - "Class specified by saml2_config." - "user_mapping_provider.module is missing required " - "methods: %s" % (", ".join(missing_methods),) - ) - - # Get the desired saml auth response attributes from the module - saml2_config_dict = self._default_saml_config_dict( - *self.saml2_user_mapping_provider_class.get_saml_attributes( - self.saml2_user_mapping_provider_config - ) - ) - _dict_merge( - merge_dict=saml2_config.get("sp_config", {}), into_dict=saml2_config_dict - ) - - config_path = saml2_config.get("config_path", None) - if config_path is not None: - mod = load_python_module(config_path) - _dict_merge(merge_dict=mod.CONFIG, into_dict=saml2_config_dict) - - import saml2.config - - self.saml2_sp_config = saml2.config.SPConfig() - self.saml2_sp_config.load(saml2_config_dict) - - # session lifetime: in milliseconds - self.saml2_session_lifetime = self.parse_duration( - saml2_config.get("saml_session_lifetime", "15m") - ) - - def _default_saml_config_dict( - self, required_attributes: set, optional_attributes: set - ): - """Generate a configuration dictionary with required and optional attributes that - will be needed to process new user registration - - Args: - required_attributes: SAML auth response attributes that are - necessary to function - optional_attributes: SAML auth response attributes that can be used to add - additional information to Synapse user accounts, but are not required - - Returns: - dict: A SAML configuration dictionary - """ - import saml2 - - public_baseurl = self.public_baseurl - if public_baseurl is None: - raise ConfigError("saml2_config requires a public_baseurl to be set") - - if self.saml2_grandfathered_mxid_source_attribute: - optional_attributes.add(self.saml2_grandfathered_mxid_source_attribute) - optional_attributes -= required_attributes - - metadata_url = public_baseurl + "_synapse/client/saml2/metadata.xml" - response_url = public_baseurl + "_synapse/client/saml2/authn_response" - return { - "entityid": metadata_url, - "service": { - "sp": { - "endpoints": { - "assertion_consumer_service": [ - (response_url, saml2.BINDING_HTTP_POST) - ] - }, - "required_attributes": list(required_attributes), - "optional_attributes": list(optional_attributes), - # "name_id_format": saml2.saml.NAMEID_FORMAT_PERSISTENT, - } - }, - } - - def generate_config_section(self, config_dir_path, server_name, **kwargs): - return """\ - ## Single sign-on integration ## - - # The following settings can be used to make Synapse use a single sign-on - # provider for authentication, instead of its internal password database. 
-        #
-        # You will probably also want to set the following options to `false` to
-        # disable the regular login/registration flows:
-        #   * enable_registration
-        #   * password_config.enabled
-        #
-        # You will also want to investigate the settings under the "sso" configuration
-        # section below.
-
-        # Enable SAML2 for registration and login. Uses pysaml2.
-        #
-        # At least one of `sp_config` or `config_path` must be set in this section to
-        # enable SAML login.
-        #
-        # Once SAML support is enabled, a metadata file will be exposed at
-        # https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to
-        # use to configure your SAML IdP with. Alternatively, you can manually configure
-        # the IdP to use an ACS location of
-        # https://<server>:<port>/_synapse/client/saml2/authn_response.
-        #
-        saml2_config:
-          # `sp_config` is the configuration for the pysaml2 Service Provider.
-          # See pysaml2 docs for format of config.
-          #
-          # Default values will be used for the 'entityid' and 'service' settings,
-          # so it is not normally necessary to specify them unless you need to
-          # override them.
-          #
-          sp_config:
-            # Point this to the IdP's metadata. You must provide either a local
-            # file via the `local` attribute or (preferably) a URL via the
-            # `remote` attribute.
-            #
-            #metadata:
-            #  local: ["saml2/idp.xml"]
-            #  remote:
-            #    - url: https://our_idp/metadata.xml
-
-            # Allowed clock difference in seconds between the homeserver and IdP.
-            #
-            # Uncomment the below to increase the accepted time difference from 0 to 3 seconds.
-            #
-            #accepted_time_diff: 3
-
-            # By default, the user has to go to our login page first. If you'd like
-            # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
-            # 'service.sp' section:
-            #
-            #service:
-            #  sp:
-            #    allow_unsolicited: true
-
-            # The examples below are just used to generate our metadata xml, and you
-            # may well not need them, depending on your setup. Alternatively you
-            # may need a whole lot more detail - see the pysaml2 docs!
-
-            #description: ["My awesome SP", "en"]
-            #name: ["Test SP", "en"]
-
-            #ui_info:
-            #  display_name:
-            #    - lang: en
-            #      text: "Display Name is the descriptive name of your service."
-            #  description:
-            #    - lang: en
-            #      text: "Description should be a short paragraph explaining the purpose of the service."
-            #  information_url:
-            #    - lang: en
-            #      text: "https://example.com/terms-of-service"
-            #  privacy_statement_url:
-            #    - lang: en
-            #      text: "https://example.com/privacy-policy"
-            #  keywords:
-            #    - lang: en
-            #      text: ["Matrix", "Element"]
-            #  logo:
-            #    - lang: en
-            #      text: "https://example.com/logo.svg"
-            #      width: "200"
-            #      height: "80"
-
-            #organization:
-            #  name: Example com
-            #  display_name:
-            #    - ["Example co", "en"]
-            #  url: "http://example.com"
-
-            #contact_person:
-            #  - given_name: Bob
-            #    sur_name: "the Sysadmin"
-            #    email_address: ["admin@example.com"]
-            #    contact_type: technical
-
-          # Instead of putting the config inline as above, you can specify a
-          # separate pysaml2 configuration file:
-          #
-          #config_path: "%(config_dir_path)s/sp_conf.py"
-
-          # The lifetime of a SAML session. This defines how long a user has to
-          # complete the authentication process, if allow_unsolicited is unset.
-          # The default is 15 minutes.
-          #
-          #saml_session_lifetime: 5m
-
-          # An external module can be provided here as a custom solution to
-          # mapping attributes returned from a saml provider onto a matrix user.
-          #
-          user_mapping_provider:
-            # The custom module's class. Uncomment to use a custom module.
- # - #module: mapping_provider.SamlMappingProvider - - # Custom configuration values for the module. Below options are - # intended for the built-in provider, they should be changed if - # using a custom module. This section will be passed as a Python - # dictionary to the module's `parse_config` method. - # - config: - # The SAML attribute (after mapping via the attribute maps) to use - # to derive the Matrix ID from. 'uid' by default. - # - # Note: This used to be configured by the - # saml2_config.mxid_source_attribute option. If that is still - # defined, its value will be used instead. - # - #mxid_source_attribute: displayName - - # The mapping system to use for mapping the saml attribute onto a - # matrix ID. - # - # Options include: - # * 'hexencode' (which maps unpermitted characters to '=xx') - # * 'dotreplace' (which replaces unpermitted characters with - # '.'). - # The default is 'hexencode'. - # - # Note: This used to be configured by the - # saml2_config.mxid_mapping option. If that is still defined, its - # value will be used instead. - # - #mxid_mapping: dotreplace - - # In previous versions of synapse, the mapping from SAML attribute to - # MXID was always calculated dynamically rather than stored in a - # table. For backwards- compatibility, we will look for user_ids - # matching such a pattern before creating a new account. - # - # This setting controls the SAML attribute which will be used for this - # backwards-compatibility lookup. Typically it should be 'uid', but if - # the attribute maps are changed, it may be necessary to change it. - # - # The default is 'uid'. - # - #grandfathered_mxid_source_attribute: upn - - # It is possible to configure Synapse to only allow logins if SAML attributes - # match particular values. The requirements can be listed under - # `attribute_requirements` as shown below. All of the listed attributes must - # match for the login to be permitted. - # - #attribute_requirements: - # - attribute: userGroup - # value: "staff" - # - attribute: department - # value: "sales" - - # If the metadata XML contains multiple IdP entities then the `idp_entityid` - # option must be set to the entity to redirect users to. - # - # Most deployments only have a single IdP entity and so should omit this - # option. - # - #idp_entityid: 'https://our_idp/entityid' - """ % { - "config_dir_path": config_dir_path - } - - -ATTRIBUTE_REQUIREMENTS_SCHEMA = { - "type": "array", - "items": SsoAttributeRequirement.JSON_SCHEMA, -} - - -def _parse_attribute_requirements_def( - attribute_requirements: Any, -) -> List[SsoAttributeRequirement]: - validate_config( - ATTRIBUTE_REQUIREMENTS_SCHEMA, - attribute_requirements, - config_path=("saml2_config", "attribute_requirements"), - ) - return [SsoAttributeRequirement(**x) for x in attribute_requirements] diff --git a/synapse/config/server_notices.py b/synapse/config/server_notices.py new file mode 100644 index 0000000000..48bf3241b6 --- /dev/null +++ b/synapse/config/server_notices.py @@ -0,0 +1,83 @@ +# Copyright 2018 New Vector Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +from synapse.types import UserID + +from ._base import Config + +DEFAULT_CONFIG = """\ +# Server Notices room configuration +# +# Uncomment this section to enable a room which can be used to send notices +# from the server to users. It is a special room which cannot be left; notices +# come from a special "notices" user id. +# +# If you uncomment this section, you *must* define the system_mxid_localpart +# setting, which defines the id of the user which will be used to send the +# notices. +# +# It's also possible to override the room name, the display name of the +# "notices" user, and the avatar for the user. +# +#server_notices: +# system_mxid_localpart: notices +# system_mxid_display_name: "Server Notices" +# system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ" +# room_name: "Server Notices" +""" + + +class ServerNoticesConfig(Config): + """Configuration for the server notices room. + + Attributes: + server_notices_mxid (str|None): + The MXID to use for server notices. + None if server notices are not enabled. + + server_notices_mxid_display_name (str|None): + The display name to use for the server notices user. + None if server notices are not enabled. + + server_notices_mxid_avatar_url (str|None): + The MXC URL for the avatar of the server notices user. + None if server notices are not enabled. + + server_notices_room_name (str|None): + The name to use for the server notices room. + None if server notices are not enabled. + """ + + section = "servernotices" + + def __init__(self, *args): + super().__init__(*args) + self.server_notices_mxid = None + self.server_notices_mxid_display_name = None + self.server_notices_mxid_avatar_url = None + self.server_notices_room_name = None + + def read_config(self, config, **kwargs): + c = config.get("server_notices") + if c is None: + return + + mxid_localpart = c["system_mxid_localpart"] + self.server_notices_mxid = UserID(mxid_localpart, self.server_name).to_string() + self.server_notices_mxid_display_name = c.get("system_mxid_display_name", None) + self.server_notices_mxid_avatar_url = c.get("system_mxid_avatar_url", None) + # todo: i18n + self.server_notices_room_name = c.get("room_name", "Server Notices") + + def generate_config_section(self, **kwargs): + return DEFAULT_CONFIG diff --git a/synapse/config/server_notices_config.py b/synapse/config/server_notices_config.py deleted file mode 100644 index 48bf3241b6..0000000000 --- a/synapse/config/server_notices_config.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright 2018 New Vector Ltd -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from synapse.types import UserID - -from ._base import Config - -DEFAULT_CONFIG = """\ -# Server Notices room configuration -# -# Uncomment this section to enable a room which can be used to send notices -# from the server to users. It is a special room which cannot be left; notices -# come from a special "notices" user id. 
-# -# If you uncomment this section, you *must* define the system_mxid_localpart -# setting, which defines the id of the user which will be used to send the -# notices. -# -# It's also possible to override the room name, the display name of the -# "notices" user, and the avatar for the user. -# -#server_notices: -# system_mxid_localpart: notices -# system_mxid_display_name: "Server Notices" -# system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ" -# room_name: "Server Notices" -""" - - -class ServerNoticesConfig(Config): - """Configuration for the server notices room. - - Attributes: - server_notices_mxid (str|None): - The MXID to use for server notices. - None if server notices are not enabled. - - server_notices_mxid_display_name (str|None): - The display name to use for the server notices user. - None if server notices are not enabled. - - server_notices_mxid_avatar_url (str|None): - The MXC URL for the avatar of the server notices user. - None if server notices are not enabled. - - server_notices_room_name (str|None): - The name to use for the server notices room. - None if server notices are not enabled. - """ - - section = "servernotices" - - def __init__(self, *args): - super().__init__(*args) - self.server_notices_mxid = None - self.server_notices_mxid_display_name = None - self.server_notices_mxid_avatar_url = None - self.server_notices_room_name = None - - def read_config(self, config, **kwargs): - c = config.get("server_notices") - if c is None: - return - - mxid_localpart = c["system_mxid_localpart"] - self.server_notices_mxid = UserID(mxid_localpart, self.server_name).to_string() - self.server_notices_mxid_display_name = c.get("system_mxid_display_name", None) - self.server_notices_mxid_avatar_url = c.get("system_mxid_avatar_url", None) - # todo: i18n - self.server_notices_room_name = c.get("room_name", "Server Notices") - - def generate_config_section(self, **kwargs): - return DEFAULT_CONFIG diff --git a/synapse/handlers/cas.py b/synapse/handlers/cas.py new file mode 100644 index 0000000000..7346ccfe93 --- /dev/null +++ b/synapse/handlers/cas.py @@ -0,0 +1,393 @@ +# Copyright 2020 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
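Before moving on to the CAS handler below: as a quick illustration of what `ServerNoticesConfig.read_config` derives from the `server_notices` section above, here is a minimal sketch (assuming a homeserver named `example.com` and the commented-out defaults from the sample config):

    from synapse.types import UserID

    # The configured system_mxid_localpart ("notices") combined with the
    # homeserver's server_name gives the user ID that sends every notice:
    sender = UserID("notices", "example.com").to_string()
    print(sender)  # @notices:example.com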
+import logging +import urllib.parse +from typing import TYPE_CHECKING, Dict, List, Optional +from xml.etree import ElementTree as ET + +import attr + +from twisted.web.client import PartialDownloadError + +from synapse.api.errors import HttpResponseException +from synapse.handlers.sso import MappingException, UserAttributes +from synapse.http.site import SynapseRequest +from synapse.types import UserID, map_username_to_mxid_localpart + +if TYPE_CHECKING: + from synapse.server import HomeServer + +logger = logging.getLogger(__name__) + + +class CasError(Exception): + """Used to catch errors when validating the CAS ticket.""" + + def __init__(self, error, error_description=None): + self.error = error + self.error_description = error_description + + def __str__(self): + if self.error_description: + return "{}: {}".format(self.error, self.error_description) + return self.error + + +@attr.s(slots=True, frozen=True) +class CasResponse: + username = attr.ib(type=str) + attributes = attr.ib(type=Dict[str, List[Optional[str]]]) + + +class CasHandler: + """ + Utility class for to handle the response from a CAS SSO service. + + Args: + hs + """ + + def __init__(self, hs: "HomeServer"): + self.hs = hs + self._hostname = hs.hostname + self._store = hs.get_datastore() + self._auth_handler = hs.get_auth_handler() + self._registration_handler = hs.get_registration_handler() + + self._cas_server_url = hs.config.cas_server_url + self._cas_service_url = hs.config.cas_service_url + self._cas_displayname_attribute = hs.config.cas_displayname_attribute + self._cas_required_attributes = hs.config.cas_required_attributes + + self._http_client = hs.get_proxied_http_client() + + # identifier for the external_ids table + self.idp_id = "cas" + + # user-facing name of this auth provider + self.idp_name = "CAS" + + # we do not currently support brands/icons for CAS auth, but this is required by + # the SsoIdentityProvider protocol type. + self.idp_icon = None + self.idp_brand = None + self.unstable_idp_brand = None + + self._sso_handler = hs.get_sso_handler() + + self._sso_handler.register_identity_provider(self) + + def _build_service_param(self, args: Dict[str, str]) -> str: + """ + Generates a value to use as the "service" parameter when redirecting or + querying the CAS service. + + Args: + args: Additional arguments to include in the final redirect URL. + + Returns: + The URL to use as a "service" parameter. + """ + return "%s?%s" % ( + self._cas_service_url, + urllib.parse.urlencode(args), + ) + + async def _validate_ticket( + self, ticket: str, service_args: Dict[str, str] + ) -> CasResponse: + """ + Validate a CAS ticket with the server, and return the parsed the response. + + Args: + ticket: The CAS ticket from the client. + service_args: Additional arguments to include in the service URL. + Should be the same as those passed to `handle_redirect_request`. + + Raises: + CasError: If there's an error parsing the CAS response. + + Returns: + The parsed CAS response. + """ + uri = self._cas_server_url + "/proxyValidate" + args = { + "ticket": ticket, + "service": self._build_service_param(service_args), + } + try: + body = await self._http_client.get_raw(uri, args) + except PartialDownloadError as pde: + # Twisted raises this error if the connection is closed, + # even if that's being used old-http style to signal end-of-data + body = pde.response + except HttpResponseException as e: + description = ( + ( + 'Authorization server responded with a "{status}" error ' + "while exchanging the authorization code." 
+                ).format(status=e.code)
+            )
+            raise CasError("server_error", description) from e
+
+        return self._parse_cas_response(body)
+
+    def _parse_cas_response(self, cas_response_body: bytes) -> CasResponse:
+        """
+        Retrieve the user and other parameters from the CAS response.
+
+        Args:
+            cas_response_body: The response from the CAS query.
+
+        Raises:
+            CasError: If there's an error parsing the CAS response.
+
+        Returns:
+            The parsed CAS response.
+        """
+
+        # Ensure the response is valid.
+        root = ET.fromstring(cas_response_body)
+        if not root.tag.endswith("serviceResponse"):
+            raise CasError(
+                "missing_service_response",
+                "root of CAS response is not serviceResponse",
+            )
+
+        success = root[0].tag.endswith("authenticationSuccess")
+        if not success:
+            raise CasError("unsuccessful_response", "Unsuccessful CAS response")
+
+        # Iterate through the nodes and pull out the user and any extra attributes.
+        user = None
+        attributes = {}  # type: Dict[str, List[Optional[str]]]
+        for child in root[0]:
+            if child.tag.endswith("user"):
+                user = child.text
+            if child.tag.endswith("attributes"):
+                for attribute in child:
+                    # ElementTree library expands the namespace in
+                    # attribute tags to the full URL of the namespace.
+                    # We don't care about namespace here and it will always
+                    # be encased in curly braces, so we remove them.
+                    tag = attribute.tag
+                    if "}" in tag:
+                        tag = tag.split("}")[1]
+                    attributes.setdefault(tag, []).append(attribute.text)
+
+        # Ensure a user was found.
+        if user is None:
+            raise CasError("no_user", "CAS response does not contain user")
+
+        return CasResponse(user, attributes)
+
+    async def handle_redirect_request(
+        self,
+        request: SynapseRequest,
+        client_redirect_url: Optional[bytes],
+        ui_auth_session_id: Optional[str] = None,
+    ) -> str:
+        """Generates a URL for the CAS server where the client should be redirected.
+
+        Args:
+            request: the incoming HTTP request
+            client_redirect_url: the URL that we should redirect the
+                client to after login (or None for UI Auth).
+            ui_auth_session_id: The session ID of the ongoing UI Auth (or
+                None if this is a login).
+
+        Returns:
+            URL to redirect to
+        """
+
+        if ui_auth_session_id:
+            service_args = {"session": ui_auth_session_id}
+        else:
+            assert client_redirect_url
+            service_args = {"redirectUrl": client_redirect_url.decode("utf8")}
+
+        args = urllib.parse.urlencode(
+            {"service": self._build_service_param(service_args)}
+        )
+
+        return "%s/login?%s" % (self._cas_server_url, args)
+
+    async def handle_ticket(
+        self,
+        request: SynapseRequest,
+        ticket: str,
+        client_redirect_url: Optional[str],
+        session: Optional[str],
+    ) -> None:
+        """
+        Called once the user has successfully authenticated with the SSO.
+        Validates a CAS ticket sent by the client and completes the auth process.
+
+        If the user interactive authentication session is provided, marks the
+        UI Auth session as complete, then returns an HTML page notifying the
+        user they are done.
+
+        Otherwise, this registers the user if necessary, and then returns a
+        redirect (with a login token) to the client.
+
+        Args:
+            request: the incoming request from the browser. We'll
+                respond to it with a redirect or an HTML page.
+
+            ticket: The CAS ticket provided by the client.
+
+            client_redirect_url: the redirectUrl parameter from the `/cas/ticket` HTTP request, if given.
+                This should be the same as the redirectUrl from the original `/login/sso/redirect` request.
+
+            session: The session parameter from the `/cas/ticket` HTTP request, if given.
+                This should be the UI Auth session id.
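+
+        For illustration only (the ticket value and the exact path are
+        hypothetical), a login flow reaches this handler via a request
+        along the lines of::
+
+            GET /cas/ticket?redirectUrl=https%3A%2F%2Fclient.example.com&ticket=ST-0123456789-abcdef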
+ """ + args = {} + if client_redirect_url: + args["redirectUrl"] = client_redirect_url + if session: + args["session"] = session + + try: + cas_response = await self._validate_ticket(ticket, args) + except CasError as e: + logger.exception("Could not validate ticket") + self._sso_handler.render_error(request, e.error, e.error_description, 401) + return + + await self._handle_cas_response( + request, cas_response, client_redirect_url, session + ) + + async def _handle_cas_response( + self, + request: SynapseRequest, + cas_response: CasResponse, + client_redirect_url: Optional[str], + session: Optional[str], + ) -> None: + """Handle a CAS response to a ticket request. + + Assumes that the response has been validated. Maps the user onto an MXID, + registering them if necessary, and returns a response to the browser. + + Args: + request: the incoming request from the browser. We'll respond to it with an + HTML page or a redirect + + cas_response: The parsed CAS response. + + client_redirect_url: the redirectUrl parameter from the `/cas/ticket` HTTP request, if given. + This should be the same as the redirectUrl from the original `/login/sso/redirect` request. + + session: The session parameter from the `/cas/ticket` HTTP request, if given. + This should be the UI Auth session id. + """ + + # first check if we're doing a UIA + if session: + return await self._sso_handler.complete_sso_ui_auth_request( + self.idp_id, + cas_response.username, + session, + request, + ) + + # otherwise, we're handling a login request. + + # Ensure that the attributes of the logged in user meet the required + # attributes. + if not self._sso_handler.check_required_attributes( + request, cas_response.attributes, self._cas_required_attributes + ): + return + + # Call the mapper to register/login the user + + # If this not a UI auth request than there must be a redirect URL. + assert client_redirect_url is not None + + try: + await self._complete_cas_login(cas_response, request, client_redirect_url) + except MappingException as e: + logger.exception("Could not map user") + self._sso_handler.render_error(request, "mapping_error", str(e)) + + async def _complete_cas_login( + self, + cas_response: CasResponse, + request: SynapseRequest, + client_redirect_url: str, + ) -> None: + """ + Given a CAS response, complete the login flow + + Retrieves the remote user ID, registers the user if necessary, and serves + a redirect back to the client with a login-token. + + Args: + cas_response: The parsed CAS response. + request: The request to respond to + client_redirect_url: The redirect URL passed in by the client. + + Raises: + MappingException if there was a problem mapping the response to a user. + RedirectException: some mapping providers may raise this if they need + to redirect to an interstitial page. + """ + # Note that CAS does not support a mapping provider, so the logic is hard-coded. + localpart = map_username_to_mxid_localpart(cas_response.username) + + async def cas_response_to_user_attributes(failures: int) -> UserAttributes: + """ + Map from CAS attributes to user attributes. + """ + # Due to the grandfathering logic matching any previously registered + # mxids it isn't expected for there to be any failures. + if failures: + raise RuntimeError("CAS is not expected to de-duplicate Matrix IDs") + + # Arbitrarily use the first attribute found. 
+            display_name = cas_response.attributes.get(
+                self._cas_displayname_attribute, [None]
+            )[0]
+
+            return UserAttributes(localpart=localpart, display_name=display_name)
+
+        async def grandfather_existing_users() -> Optional[str]:
+            # Since CAS did not always use the user_external_ids table, always
+            # attempt to map to existing users.
+            user_id = UserID(localpart, self._hostname).to_string()
+
+            logger.debug(
+                "Looking for existing account based on mapped %s",
+                user_id,
+            )
+
+            users = await self._store.get_users_by_id_case_insensitive(user_id)
+            if users:
+                registered_user_id = list(users.keys())[0]
+                logger.info("Grandfathering mapping to %s", registered_user_id)
+                return registered_user_id
+
+            return None
+
+        await self._sso_handler.complete_sso_login_request(
+            self.idp_id,
+            cas_response.username,
+            request,
+            client_redirect_url,
+            cas_response_to_user_attributes,
+            grandfather_existing_users,
+        )
diff --git a/synapse/handlers/cas_handler.py b/synapse/handlers/cas_handler.py
deleted file mode 100644
index 7346ccfe93..0000000000
--- a/synapse/handlers/cas_handler.py
+++ /dev/null
@@ -1,393 +0,0 @@
-# Copyright 2020 The Matrix.org Foundation C.I.C.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import logging
-import urllib.parse
-from typing import TYPE_CHECKING, Dict, List, Optional
-from xml.etree import ElementTree as ET
-
-import attr
-
-from twisted.web.client import PartialDownloadError
-
-from synapse.api.errors import HttpResponseException
-from synapse.handlers.sso import MappingException, UserAttributes
-from synapse.http.site import SynapseRequest
-from synapse.types import UserID, map_username_to_mxid_localpart
-
-if TYPE_CHECKING:
-    from synapse.server import HomeServer
-
-logger = logging.getLogger(__name__)
-
-
-class CasError(Exception):
-    """Used to catch errors when validating the CAS ticket."""
-
-    def __init__(self, error, error_description=None):
-        self.error = error
-        self.error_description = error_description
-
-    def __str__(self):
-        if self.error_description:
-            return "{}: {}".format(self.error, self.error_description)
-        return self.error
-
-
-@attr.s(slots=True, frozen=True)
-class CasResponse:
-    username = attr.ib(type=str)
-    attributes = attr.ib(type=Dict[str, List[Optional[str]]])
-
-
-class CasHandler:
-    """
-    Utility class for to handle the response from a CAS SSO service.
- - Args: - hs - """ - - def __init__(self, hs: "HomeServer"): - self.hs = hs - self._hostname = hs.hostname - self._store = hs.get_datastore() - self._auth_handler = hs.get_auth_handler() - self._registration_handler = hs.get_registration_handler() - - self._cas_server_url = hs.config.cas_server_url - self._cas_service_url = hs.config.cas_service_url - self._cas_displayname_attribute = hs.config.cas_displayname_attribute - self._cas_required_attributes = hs.config.cas_required_attributes - - self._http_client = hs.get_proxied_http_client() - - # identifier for the external_ids table - self.idp_id = "cas" - - # user-facing name of this auth provider - self.idp_name = "CAS" - - # we do not currently support brands/icons for CAS auth, but this is required by - # the SsoIdentityProvider protocol type. - self.idp_icon = None - self.idp_brand = None - self.unstable_idp_brand = None - - self._sso_handler = hs.get_sso_handler() - - self._sso_handler.register_identity_provider(self) - - def _build_service_param(self, args: Dict[str, str]) -> str: - """ - Generates a value to use as the "service" parameter when redirecting or - querying the CAS service. - - Args: - args: Additional arguments to include in the final redirect URL. - - Returns: - The URL to use as a "service" parameter. - """ - return "%s?%s" % ( - self._cas_service_url, - urllib.parse.urlencode(args), - ) - - async def _validate_ticket( - self, ticket: str, service_args: Dict[str, str] - ) -> CasResponse: - """ - Validate a CAS ticket with the server, and return the parsed the response. - - Args: - ticket: The CAS ticket from the client. - service_args: Additional arguments to include in the service URL. - Should be the same as those passed to `handle_redirect_request`. - - Raises: - CasError: If there's an error parsing the CAS response. - - Returns: - The parsed CAS response. - """ - uri = self._cas_server_url + "/proxyValidate" - args = { - "ticket": ticket, - "service": self._build_service_param(service_args), - } - try: - body = await self._http_client.get_raw(uri, args) - except PartialDownloadError as pde: - # Twisted raises this error if the connection is closed, - # even if that's being used old-http style to signal end-of-data - body = pde.response - except HttpResponseException as e: - description = ( - ( - 'Authorization server responded with a "{status}" error ' - "while exchanging the authorization code." - ).format(status=e.code), - ) - raise CasError("server_error", description) from e - - return self._parse_cas_response(body) - - def _parse_cas_response(self, cas_response_body: bytes) -> CasResponse: - """ - Retrieve the user and other parameters from the CAS response. - - Args: - cas_response_body: The response from the CAS query. - - Raises: - CasError: If there's an error parsing the CAS response. - - Returns: - The parsed CAS response. - """ - - # Ensure the response is valid. - root = ET.fromstring(cas_response_body) - if not root.tag.endswith("serviceResponse"): - raise CasError( - "missing_service_response", - "root of CAS response is not serviceResponse", - ) - - success = root[0].tag.endswith("authenticationSuccess") - if not success: - raise CasError("unsucessful_response", "Unsuccessful CAS response") - - # Iterate through the nodes and pull out the user and any extra attributes. 
- user = None - attributes = {} # type: Dict[str, List[Optional[str]]] - for child in root[0]: - if child.tag.endswith("user"): - user = child.text - if child.tag.endswith("attributes"): - for attribute in child: - # ElementTree library expands the namespace in - # attribute tags to the full URL of the namespace. - # We don't care about namespace here and it will always - # be encased in curly braces, so we remove them. - tag = attribute.tag - if "}" in tag: - tag = tag.split("}")[1] - attributes.setdefault(tag, []).append(attribute.text) - - # Ensure a user was found. - if user is None: - raise CasError("no_user", "CAS response does not contain user") - - return CasResponse(user, attributes) - - async def handle_redirect_request( - self, - request: SynapseRequest, - client_redirect_url: Optional[bytes], - ui_auth_session_id: Optional[str] = None, - ) -> str: - """Generates a URL for the CAS server where the client should be redirected. - - Args: - request: the incoming HTTP request - client_redirect_url: the URL that we should redirect the - client to after login (or None for UI Auth). - ui_auth_session_id: The session ID of the ongoing UI Auth (or - None if this is a login). - - Returns: - URL to redirect to - """ - - if ui_auth_session_id: - service_args = {"session": ui_auth_session_id} - else: - assert client_redirect_url - service_args = {"redirectUrl": client_redirect_url.decode("utf8")} - - args = urllib.parse.urlencode( - {"service": self._build_service_param(service_args)} - ) - - return "%s/login?%s" % (self._cas_server_url, args) - - async def handle_ticket( - self, - request: SynapseRequest, - ticket: str, - client_redirect_url: Optional[str], - session: Optional[str], - ) -> None: - """ - Called once the user has successfully authenticated with the SSO. - Validates a CAS ticket sent by the client and completes the auth process. - - If the user interactive authentication session is provided, marks the - UI Auth session as complete, then returns an HTML page notifying the - user they are done. - - Otherwise, this registers the user if necessary, and then returns a - redirect (with a login token) to the client. - - Args: - request: the incoming request from the browser. We'll - respond to it with a redirect or an HTML page. - - ticket: The CAS ticket provided by the client. - - client_redirect_url: the redirectUrl parameter from the `/cas/ticket` HTTP request, if given. - This should be the same as the redirectUrl from the original `/login/sso/redirect` request. - - session: The session parameter from the `/cas/ticket` HTTP request, if given. - This should be the UI Auth session id. - """ - args = {} - if client_redirect_url: - args["redirectUrl"] = client_redirect_url - if session: - args["session"] = session - - try: - cas_response = await self._validate_ticket(ticket, args) - except CasError as e: - logger.exception("Could not validate ticket") - self._sso_handler.render_error(request, e.error, e.error_description, 401) - return - - await self._handle_cas_response( - request, cas_response, client_redirect_url, session - ) - - async def _handle_cas_response( - self, - request: SynapseRequest, - cas_response: CasResponse, - client_redirect_url: Optional[str], - session: Optional[str], - ) -> None: - """Handle a CAS response to a ticket request. - - Assumes that the response has been validated. Maps the user onto an MXID, - registering them if necessary, and returns a response to the browser. - - Args: - request: the incoming request from the browser. 
We'll respond to it with an - HTML page or a redirect - - cas_response: The parsed CAS response. - - client_redirect_url: the redirectUrl parameter from the `/cas/ticket` HTTP request, if given. - This should be the same as the redirectUrl from the original `/login/sso/redirect` request. - - session: The session parameter from the `/cas/ticket` HTTP request, if given. - This should be the UI Auth session id. - """ - - # first check if we're doing a UIA - if session: - return await self._sso_handler.complete_sso_ui_auth_request( - self.idp_id, - cas_response.username, - session, - request, - ) - - # otherwise, we're handling a login request. - - # Ensure that the attributes of the logged in user meet the required - # attributes. - if not self._sso_handler.check_required_attributes( - request, cas_response.attributes, self._cas_required_attributes - ): - return - - # Call the mapper to register/login the user - - # If this not a UI auth request than there must be a redirect URL. - assert client_redirect_url is not None - - try: - await self._complete_cas_login(cas_response, request, client_redirect_url) - except MappingException as e: - logger.exception("Could not map user") - self._sso_handler.render_error(request, "mapping_error", str(e)) - - async def _complete_cas_login( - self, - cas_response: CasResponse, - request: SynapseRequest, - client_redirect_url: str, - ) -> None: - """ - Given a CAS response, complete the login flow - - Retrieves the remote user ID, registers the user if necessary, and serves - a redirect back to the client with a login-token. - - Args: - cas_response: The parsed CAS response. - request: The request to respond to - client_redirect_url: The redirect URL passed in by the client. - - Raises: - MappingException if there was a problem mapping the response to a user. - RedirectException: some mapping providers may raise this if they need - to redirect to an interstitial page. - """ - # Note that CAS does not support a mapping provider, so the logic is hard-coded. - localpart = map_username_to_mxid_localpart(cas_response.username) - - async def cas_response_to_user_attributes(failures: int) -> UserAttributes: - """ - Map from CAS attributes to user attributes. - """ - # Due to the grandfathering logic matching any previously registered - # mxids it isn't expected for there to be any failures. - if failures: - raise RuntimeError("CAS is not expected to de-duplicate Matrix IDs") - - # Arbitrarily use the first attribute found. - display_name = cas_response.attributes.get( - self._cas_displayname_attribute, [None] - )[0] - - return UserAttributes(localpart=localpart, display_name=display_name) - - async def grandfather_existing_users() -> Optional[str]: - # Since CAS did not always use the user_external_ids table, always - # to attempt to map to existing users. 
- user_id = UserID(localpart, self._hostname).to_string() - - logger.debug( - "Looking for existing account based on mapped %s", - user_id, - ) - - users = await self._store.get_users_by_id_case_insensitive(user_id) - if users: - registered_user_id = list(users.keys())[0] - logger.info("Grandfathering mapping to %s", registered_user_id) - return registered_user_id - - return None - - await self._sso_handler.complete_sso_login_request( - self.idp_id, - cas_response.username, - request, - client_redirect_url, - cas_response_to_user_attributes, - grandfather_existing_users, - ) diff --git a/synapse/handlers/oidc.py b/synapse/handlers/oidc.py new file mode 100644 index 0000000000..45514be50f --- /dev/null +++ b/synapse/handlers/oidc.py @@ -0,0 +1,1384 @@ +# Copyright 2020 Quentin Gliech +# Copyright 2021 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import inspect +import logging +from typing import TYPE_CHECKING, Dict, Generic, List, Optional, TypeVar, Union +from urllib.parse import urlencode + +import attr +import pymacaroons +from authlib.common.security import generate_token +from authlib.jose import JsonWebToken, jwt +from authlib.oauth2.auth import ClientAuth +from authlib.oauth2.rfc6749.parameters import prepare_grant_uri +from authlib.oidc.core import CodeIDToken, ImplicitIDToken, UserInfo +from authlib.oidc.discovery import OpenIDProviderMetadata, get_well_known_url +from jinja2 import Environment, Template +from pymacaroons.exceptions import ( + MacaroonDeserializationException, + MacaroonInitException, + MacaroonInvalidSignatureException, +) +from typing_extensions import TypedDict + +from twisted.web.client import readBody +from twisted.web.http_headers import Headers + +from synapse.config import ConfigError +from synapse.config.oidc import OidcProviderClientSecretJwtKey, OidcProviderConfig +from synapse.handlers.sso import MappingException, UserAttributes +from synapse.http.site import SynapseRequest +from synapse.logging.context import make_deferred_yieldable +from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart +from synapse.util import Clock, json_decoder +from synapse.util.caches.cached_call import RetryOnExceptionCachedCall +from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry + +if TYPE_CHECKING: + from synapse.server import HomeServer + +logger = logging.getLogger(__name__) + +# we want the cookie to be returned to us even when the request is the POSTed +# result of a form on another domain, as is used with `response_mode=form_post`. +# +# Modern browsers will not do so unless we set SameSite=None; however *older* +# browsers (including all versions of Safari on iOS 12?) don't support +# SameSite=None, and interpret it as SameSite=Strict: +# https://bugs.webkit.org/show_bug.cgi?id=198181 +# +# As a rather painful workaround, we set *two* cookies, one with SameSite=None +# and one with no SameSite, in the hope that at least one of them will get +# back to us. 
+#
+# Secure is necessary for SameSite=None (and, empirically, also breaks things
+# on iOS 12.)
+#
+# Here we have the names of the cookies, and the options we use to set them.
+_SESSION_COOKIES = [
+    (b"oidc_session", b"Path=/_synapse/client/oidc; HttpOnly; Secure; SameSite=None"),
+    (b"oidc_session_no_samesite", b"Path=/_synapse/client/oidc; HttpOnly"),
+]
+
+#: A token exchanged from the token endpoint, as per RFC6749 sec 5.1. and
+#: OpenID.Core sec 3.1.3.3.
+Token = TypedDict(
+    "Token",
+    {
+        "access_token": str,
+        "token_type": str,
+        "id_token": Optional[str],
+        "refresh_token": Optional[str],
+        "expires_in": int,
+        "scope": Optional[str],
+    },
+)
+
+#: A JWK, as per RFC7517 sec 4. The type could be more precise than that, but
+#: there is no real point in doing this in our case.
+JWK = Dict[str, str]
+
+#: A JWK Set, as per RFC7517 sec 5.
+JWKS = TypedDict("JWKS", {"keys": List[JWK]})
+
+
+class OidcHandler:
+    """Handles requests related to the OpenID Connect login flow."""
+
+    def __init__(self, hs: "HomeServer"):
+        self._sso_handler = hs.get_sso_handler()
+
+        provider_confs = hs.config.oidc.oidc_providers
+        # we should not have been instantiated if there is no configured provider.
+        assert provider_confs
+
+        self._token_generator = OidcSessionTokenGenerator(hs)
+        self._providers = {
+            p.idp_id: OidcProvider(hs, self._token_generator, p) for p in provider_confs
+        }  # type: Dict[str, OidcProvider]
+
+    async def load_metadata(self) -> None:
+        """Validate the config and load the metadata from the remote endpoint.
+
+        Called at startup to ensure we have everything we need.
+        """
+        for idp_id, p in self._providers.items():
+            try:
+                await p.load_metadata()
+                await p.load_jwks()
+            except Exception as e:
+                raise Exception(
+                    "Error while initialising OIDC provider %r" % (idp_id,)
+                ) from e
+
+    async def handle_oidc_callback(self, request: SynapseRequest) -> None:
+        """Handle an incoming request to /_synapse/client/oidc/callback
+
+        Since we might want to display OIDC-related errors in a user-friendly
+        way, we don't raise SynapseError from here. Instead, we call
+        ``self._sso_handler.render_error`` which displays an HTML page for the error.
+
+        Most of the OpenID Connect logic happens here:
+
+        - first, we check if there was any error returned by the provider and
+          display it
+        - then we fetch the session cookie, decode and verify it
+        - the ``state`` query parameter should match with the one stored in the
+          session cookie
+
+        Once we know the session is legit, we then delegate to the OIDC Provider
+        implementation, which will exchange the code with the provider and complete the
+        login/authentication.
+
+        Args:
+            request: the incoming request from the browser.
+        """
+        # This will always be set by the time Twisted calls us.
+        assert request.args is not None
+
+        # The provider might redirect with an error.
+        # In that case, just display it as-is.
+        if b"error" in request.args:
+            # error response from the auth server. see:
+            #  https://tools.ietf.org/html/rfc6749#section-4.1.2.1
+            #  https://openid.net/specs/openid-connect-core-1_0.html#AuthError
+            error = request.args[b"error"][0].decode()
+            description = request.args.get(b"error_description", [b""])[0].decode()
+
+            # Most of the errors returned by the provider could be due to
+            # either the provider misbehaving or Synapse being misconfigured.
+            # The only exception to that is "access_denied", where the user
+            # probably cancelled the login flow. In other cases, log those errors.
+            logger.log(
+                logging.INFO if error == "access_denied" else logging.ERROR,
+                "Received OIDC callback with error: %s %s",
+                error,
+                description,
+            )
+
+            self._sso_handler.render_error(request, error, description)
+            return
+
+        # otherwise, it is presumably a successful response. see:
+        #   https://tools.ietf.org/html/rfc6749#section-4.1.2
+
+        # Fetch the session cookie. See the comments on _SESSION_COOKIES for why there
+        # are two.
+
+        for cookie_name, _ in _SESSION_COOKIES:
+            session = request.getCookie(cookie_name)  # type: Optional[bytes]
+            if session is not None:
+                break
+        else:
+            logger.info("Received OIDC callback, with no session cookie")
+            self._sso_handler.render_error(
+                request, "missing_session", "No session cookie found"
+            )
+            return
+
+        # Remove the cookies. There is a good chance that if the callback failed
+        # once, it will fail next time and the code will already be exchanged.
+        # Removing the cookies early avoids spamming the provider with token requests.
+        #
+        # we have to build the header by hand rather than calling request.addCookie
+        # because the latter does not support SameSite=None
+        # (https://twistedmatrix.com/trac/ticket/10088)
+
+        for cookie_name, options in _SESSION_COOKIES:
+            request.cookies.append(
+                b"%s=; Expires=Thu, Jan 01 1970 00:00:00 UTC; %s"
+                % (cookie_name, options)
+            )
+
+        # Check for the state query parameter
+        if b"state" not in request.args:
+            logger.info("Received OIDC callback, with no state parameter")
+            self._sso_handler.render_error(
+                request, "invalid_request", "State parameter is missing"
+            )
+            return
+
+        state = request.args[b"state"][0].decode()
+
+        # Deserialize the session token and verify it.
+        try:
+            session_data = self._token_generator.verify_oidc_session_token(
+                session, state
+            )
+        except (MacaroonInitException, MacaroonDeserializationException, KeyError) as e:
+            logger.exception("Invalid session for OIDC callback")
+            self._sso_handler.render_error(request, "invalid_session", str(e))
+            return
+        except MacaroonInvalidSignatureException as e:
+            logger.exception("Could not verify session for OIDC callback")
+            self._sso_handler.render_error(request, "mismatching_session", str(e))
+            return
+
+        logger.info("Received OIDC callback for IdP %s", session_data.idp_id)
+
+        oidc_provider = self._providers.get(session_data.idp_id)
+        if not oidc_provider:
+            logger.error("OIDC session uses unknown IdP %r", session_data.idp_id)
+            self._sso_handler.render_error(request, "unknown_idp", "Unknown IdP")
+            return
+
+        if b"code" not in request.args:
+            logger.info("Code parameter is missing")
+            self._sso_handler.render_error(
+                request, "invalid_request", "Code parameter is missing"
+            )
+            return
+
+        code = request.args[b"code"][0].decode()
+
+        await oidc_provider.handle_oidc_callback(request, session_data, code)
+
+
+class OidcError(Exception):
+    """Used to catch errors when calling the token_endpoint"""
+
+    def __init__(self, error, error_description=None):
+        self.error = error
+        self.error_description = error_description
+
+    def __str__(self):
+        if self.error_description:
+            return "{}: {}".format(self.error, self.error_description)
+        return self.error
+
+
+class OidcProvider:
+    """Wraps the config for a single OIDC IdentityProvider
+
+    Provides methods for handling redirect requests and callbacks via that particular
+    IdP.
+ """ + + def __init__( + self, + hs: "HomeServer", + token_generator: "OidcSessionTokenGenerator", + provider: OidcProviderConfig, + ): + self._store = hs.get_datastore() + + self._token_generator = token_generator + + self._config = provider + self._callback_url = hs.config.oidc_callback_url # type: str + + self._oidc_attribute_requirements = provider.attribute_requirements + self._scopes = provider.scopes + self._user_profile_method = provider.user_profile_method + + client_secret = None # type: Union[None, str, JwtClientSecret] + if provider.client_secret: + client_secret = provider.client_secret + elif provider.client_secret_jwt_key: + client_secret = JwtClientSecret( + provider.client_secret_jwt_key, + provider.client_id, + provider.issuer, + hs.get_clock(), + ) + + self._client_auth = ClientAuth( + provider.client_id, + client_secret, + provider.client_auth_method, + ) # type: ClientAuth + self._client_auth_method = provider.client_auth_method + + # cache of metadata for the identity provider (endpoint uris, mostly). This is + # loaded on-demand from the discovery endpoint (if discovery is enabled), with + # possible overrides from the config. Access via `load_metadata`. + self._provider_metadata = RetryOnExceptionCachedCall(self._load_metadata) + + # cache of JWKs used by the identity provider to sign tokens. Loaded on demand + # from the IdP's jwks_uri, if required. + self._jwks = RetryOnExceptionCachedCall(self._load_jwks) + + self._user_mapping_provider = provider.user_mapping_provider_class( + provider.user_mapping_provider_config + ) + self._skip_verification = provider.skip_verification + self._allow_existing_users = provider.allow_existing_users + + self._http_client = hs.get_proxied_http_client() + self._server_name = hs.config.server_name # type: str + + # identifier for the external_ids table + self.idp_id = provider.idp_id + + # user-facing name of this auth provider + self.idp_name = provider.idp_name + + # MXC URI for icon for this auth provider + self.idp_icon = provider.idp_icon + + # optional brand identifier for this auth provider + self.idp_brand = provider.idp_brand + + # Optional brand identifier for the unstable API (see MSC2858). + self.unstable_idp_brand = provider.unstable_idp_brand + + self._sso_handler = hs.get_sso_handler() + + self._sso_handler.register_identity_provider(self) + + def _validate_metadata(self, m: OpenIDProviderMetadata) -> None: + """Verifies the provider metadata. + + This checks the validity of the currently loaded provider. Not + everything is checked, only: + + - ``issuer`` + - ``authorization_endpoint`` + - ``token_endpoint`` + - ``response_types_supported`` (checks if "code" is in it) + - ``jwks_uri`` + + Raises: + ValueError: if something in the provider is not valid + """ + # Skip verification to allow non-compliant providers (e.g. 
issuers not running on a secure origin)
+        if self._skip_verification is True:
+            return
+
+        m.validate_issuer()
+        m.validate_authorization_endpoint()
+        m.validate_token_endpoint()
+
+        if m.get("token_endpoint_auth_methods_supported") is not None:
+            m.validate_token_endpoint_auth_methods_supported()
+            if (
+                self._client_auth_method
+                not in m["token_endpoint_auth_methods_supported"]
+            ):
+                raise ValueError(
+                    '"{auth_method}" not in "token_endpoint_auth_methods_supported" ({supported!r})'.format(
+                        auth_method=self._client_auth_method,
+                        supported=m["token_endpoint_auth_methods_supported"],
+                    )
+                )
+
+        if m.get("response_types_supported") is not None:
+            m.validate_response_types_supported()
+
+            if "code" not in m["response_types_supported"]:
+                raise ValueError(
+                    '"code" not in "response_types_supported" (%r)'
+                    % (m["response_types_supported"],)
+                )
+
+        # Ensure there's a userinfo endpoint to fetch from if it is required.
+        if self._uses_userinfo:
+            if m.get("userinfo_endpoint") is None:
+                raise ValueError(
+                    'provider has no "userinfo_endpoint", even though it is required'
+                )
+        else:
+            # If we're not using userinfo, we need a valid jwks to validate the ID token
+            m.validate_jwks_uri()
+
+    @property
+    def _uses_userinfo(self) -> bool:
+        """Returns True if the ``userinfo_endpoint`` should be used.
+
+        This is based on the requested scopes: if the scopes include
+        ``openid``, the provider should give us an ID token containing the
+        user information. If not, we should fetch them using the
+        ``access_token`` with the ``userinfo_endpoint``.
+        """
+
+        return (
+            "openid" not in self._scopes
+            or self._user_profile_method == "userinfo_endpoint"
+        )
+
+    async def load_metadata(self, force: bool = False) -> OpenIDProviderMetadata:
+        """Return the provider metadata.
+
+        If this is the first call, the metadata is built from the config and from the
+        metadata discovery endpoint (if enabled), and then validated. If the metadata
+        is successfully validated, it is then cached for future use.
+
+        Args:
+            force: If true, any cached metadata is discarded to force a reload.
+
+        Raises:
+            ValueError: if something in the provider is not valid
+
+        Returns:
+            The provider's metadata.
+        """
+        if force:
+            # reset the cached call to ensure we get a new result
+            self._provider_metadata = RetryOnExceptionCachedCall(self._load_metadata)
+
+        return await self._provider_metadata.get()
+
+    async def _load_metadata(self) -> OpenIDProviderMetadata:
+        # start out with just the issuer (unlike the other settings, discovered issuer
+        # takes precedence over configured issuer, because configured issuer is
+        # required for discovery to take place.)
+ # + metadata = OpenIDProviderMetadata(issuer=self._config.issuer) + + # load any data from the discovery endpoint, if enabled + if self._config.discover: + url = get_well_known_url(self._config.issuer, external=True) + metadata_response = await self._http_client.get_json(url) + metadata.update(metadata_response) + + # override any discovered data with any settings in our config + if self._config.authorization_endpoint: + metadata["authorization_endpoint"] = self._config.authorization_endpoint + + if self._config.token_endpoint: + metadata["token_endpoint"] = self._config.token_endpoint + + if self._config.userinfo_endpoint: + metadata["userinfo_endpoint"] = self._config.userinfo_endpoint + + if self._config.jwks_uri: + metadata["jwks_uri"] = self._config.jwks_uri + + self._validate_metadata(metadata) + + return metadata + + async def load_jwks(self, force: bool = False) -> JWKS: + """Load the JSON Web Key Set used to sign ID tokens. + + If we're not using the ``userinfo_endpoint``, user infos are extracted + from the ID token, which is a JWT signed by keys given by the provider. + The keys are then cached. + + Args: + force: Force reloading the keys. + + Returns: + The key set + + Looks like this:: + + { + 'keys': [ + { + 'kid': 'abcdef', + 'kty': 'RSA', + 'alg': 'RS256', + 'use': 'sig', + 'e': 'XXXX', + 'n': 'XXXX', + } + ] + } + """ + if force: + # reset the cached call to ensure we get a new result + self._jwks = RetryOnExceptionCachedCall(self._load_jwks) + return await self._jwks.get() + + async def _load_jwks(self) -> JWKS: + if self._uses_userinfo: + # We're not using jwt signing, return an empty jwk set + return {"keys": []} + + metadata = await self.load_metadata() + + # Load the JWKS using the `jwks_uri` metadata. + uri = metadata.get("jwks_uri") + if not uri: + # this should be unreachable: load_metadata validates that + # there is a jwks_uri in the metadata if _uses_userinfo is unset + raise RuntimeError('Missing "jwks_uri" in metadata') + + jwk_set = await self._http_client.get_json(uri) + + return jwk_set + + async def _exchange_code(self, code: str) -> Token: + """Exchange an authorization code for a token. + + This calls the ``token_endpoint`` with the authorization code we + received in the callback to exchange it for a token. The call uses the + ``ClientAuth`` to authenticate with the client with its ID and secret. + + See: + https://tools.ietf.org/html/rfc6749#section-3.2 + https://openid.net/specs/openid-connect-core-1_0.html#TokenEndpoint + + Args: + code: The authorization code we got from the callback. + + Returns: + A dict containing various tokens. + + May look like this:: + + { + 'token_type': 'bearer', + 'access_token': 'abcdef', + 'expires_in': 3599, + 'id_token': 'ghijkl', + 'refresh_token': 'mnopqr', + } + + Raises: + OidcError: when the ``token_endpoint`` returned an error. 
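+
+        For illustration only (the code value is made up and the token
+        endpoint path depends on the provider), the request sent by this
+        method looks something like::
+
+            POST /token HTTP/1.1
+            Content-Type: application/x-www-form-urlencoded
+
+            grant_type=authorization_code&code=abcdef&redirect_uri=...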
+ """ + metadata = await self.load_metadata() + token_endpoint = metadata.get("token_endpoint") + raw_headers = { + "Content-Type": "application/x-www-form-urlencoded", + "User-Agent": self._http_client.user_agent, + "Accept": "application/json", + } + + args = { + "grant_type": "authorization_code", + "code": code, + "redirect_uri": self._callback_url, + } + body = urlencode(args, True) + + # Fill the body/headers with credentials + uri, raw_headers, body = self._client_auth.prepare( + method="POST", uri=token_endpoint, headers=raw_headers, body=body + ) + headers = Headers({k: [v] for (k, v) in raw_headers.items()}) + + # Do the actual request + # We're not using the SimpleHttpClient util methods as we don't want to + # check the HTTP status code and we do the body encoding ourself. + response = await self._http_client.request( + method="POST", + uri=uri, + data=body.encode("utf-8"), + headers=headers, + ) + + # This is used in multiple error messages below + status = "{code} {phrase}".format( + code=response.code, phrase=response.phrase.decode("utf-8") + ) + + resp_body = await make_deferred_yieldable(readBody(response)) + + if response.code >= 500: + # In case of a server error, we should first try to decode the body + # and check for an error field. If not, we respond with a generic + # error message. + try: + resp = json_decoder.decode(resp_body.decode("utf-8")) + error = resp["error"] + description = resp.get("error_description", error) + except (ValueError, KeyError): + # Catch ValueError for the JSON decoding and KeyError for the "error" field + error = "server_error" + description = ( + ( + 'Authorization server responded with a "{status}" error ' + "while exchanging the authorization code." + ).format(status=status), + ) + + raise OidcError(error, description) + + # Since it is a not a 5xx code, body should be a valid JSON. It will + # raise if not. + resp = json_decoder.decode(resp_body.decode("utf-8")) + + if "error" in resp: + error = resp["error"] + # In case the authorization server responded with an error field, + # it should be a 4xx code. If not, warn about it but don't do + # anything special and report the original error message. + if response.code < 400: + logger.debug( + "Invalid response from the authorization server: " + 'responded with a "{status}" ' + "but body has an error field: {error!r}".format( + status=status, error=resp["error"] + ) + ) + + description = resp.get("error_description", error) + raise OidcError(error, description) + + # Now, this should not be an error. According to RFC6749 sec 5.1, it + # should be a 200 code. We're a bit more flexible than that, and will + # only throw on a 4xx code. + if response.code >= 400: + description = ( + 'Authorization server responded with a "{status}" error ' + 'but did not include an "error" field in its response.'.format( + status=status + ) + ) + logger.warning(description) + # Body was still valid JSON. Might be useful to log it for debugging. + logger.warning("Code exchange response: {resp!r}".format(resp=resp)) + raise OidcError("server_error", description) + + return resp + + async def _fetch_userinfo(self, token: Token) -> UserInfo: + """Fetch user information from the ``userinfo_endpoint``. + + Args: + token: the token given by the ``token_endpoint``. + Must include an ``access_token`` field. + + Returns: + UserInfo: an object representing the user. 
+ """ + logger.debug("Using the OAuth2 access_token to request userinfo") + metadata = await self.load_metadata() + + resp = await self._http_client.get_json( + metadata["userinfo_endpoint"], + headers={"Authorization": ["Bearer {}".format(token["access_token"])]}, + ) + + logger.debug("Retrieved user info from userinfo endpoint: %r", resp) + + return UserInfo(resp) + + async def _parse_id_token(self, token: Token, nonce: str) -> UserInfo: + """Return an instance of UserInfo from token's ``id_token``. + + Args: + token: the token given by the ``token_endpoint``. + Must include an ``id_token`` field. + nonce: the nonce value originally sent in the initial authorization + request. This value should match the one inside the token. + + Returns: + An object representing the user. + """ + metadata = await self.load_metadata() + claims_params = { + "nonce": nonce, + "client_id": self._client_auth.client_id, + } + if "access_token" in token: + # If we got an `access_token`, there should be an `at_hash` claim + # in the `id_token` that we can check against. + claims_params["access_token"] = token["access_token"] + claims_cls = CodeIDToken + else: + claims_cls = ImplicitIDToken + + alg_values = metadata.get("id_token_signing_alg_values_supported", ["RS256"]) + jwt = JsonWebToken(alg_values) + + claim_options = {"iss": {"values": [metadata["issuer"]]}} + + id_token = token["id_token"] + logger.debug("Attempting to decode JWT id_token %r", id_token) + + # Try to decode the keys in cache first, then retry by forcing the keys + # to be reloaded + jwk_set = await self.load_jwks() + try: + claims = jwt.decode( + id_token, + key=jwk_set, + claims_cls=claims_cls, + claims_options=claim_options, + claims_params=claims_params, + ) + except ValueError: + logger.info("Reloading JWKS after decode error") + jwk_set = await self.load_jwks(force=True) # try reloading the jwks + claims = jwt.decode( + id_token, + key=jwk_set, + claims_cls=claims_cls, + claims_options=claim_options, + claims_params=claims_params, + ) + + logger.debug("Decoded id_token JWT %r; validating", claims) + + claims.validate(leeway=120) # allows 2 min of clock skew + return UserInfo(claims) + + async def handle_redirect_request( + self, + request: SynapseRequest, + client_redirect_url: Optional[bytes], + ui_auth_session_id: Optional[str] = None, + ) -> str: + """Handle an incoming request to /login/sso/redirect + + It returns a redirect to the authorization endpoint with a few + parameters: + + - ``client_id``: the client ID set in ``oidc_config.client_id`` + - ``response_type``: ``code`` + - ``redirect_uri``: the callback URL ; ``{base url}/_synapse/client/oidc/callback`` + - ``scope``: the list of scopes set in ``oidc_config.scopes`` + - ``state``: a random string + - ``nonce``: a random string + + In addition generating a redirect URL, we are setting a cookie with + a signed macaroon token containing the state, the nonce and the + client_redirect_url params. Those are then checked when the client + comes back from the provider. + + Args: + request: the incoming request from the browser. + We'll respond to it with a redirect and a cookie. + client_redirect_url: the URL that we should redirect the client to + when everything is done (or None for UI Auth) + ui_auth_session_id: The session ID of the ongoing UI Auth (or + None if this is a login). + + Returns: + The redirect URL to the authorization endpoint. 
+ + """ + + state = generate_token() + nonce = generate_token() + + if not client_redirect_url: + client_redirect_url = b"" + + cookie = self._token_generator.generate_oidc_session_token( + state=state, + session_data=OidcSessionData( + idp_id=self.idp_id, + nonce=nonce, + client_redirect_url=client_redirect_url.decode(), + ui_auth_session_id=ui_auth_session_id or "", + ), + ) + + # Set the cookies. See the comments on _SESSION_COOKIES for why there are two. + # + # we have to build the header by hand rather than calling request.addCookie + # because the latter does not support SameSite=None + # (https://twistedmatrix.com/trac/ticket/10088) + + for cookie_name, options in _SESSION_COOKIES: + request.cookies.append( + b"%s=%s; Max-Age=3600; %s" + % (cookie_name, cookie.encode("utf-8"), options) + ) + + metadata = await self.load_metadata() + authorization_endpoint = metadata.get("authorization_endpoint") + return prepare_grant_uri( + authorization_endpoint, + client_id=self._client_auth.client_id, + response_type="code", + redirect_uri=self._callback_url, + scope=self._scopes, + state=state, + nonce=nonce, + ) + + async def handle_oidc_callback( + self, request: SynapseRequest, session_data: "OidcSessionData", code: str + ) -> None: + """Handle an incoming request to /_synapse/client/oidc/callback + + By this time we have already validated the session on the synapse side, and + now need to do the provider-specific operations. This includes: + + - exchange the code with the provider using the ``token_endpoint`` (see + ``_exchange_code``) + - once we have the token, use it to either extract the UserInfo from + the ``id_token`` (``_parse_id_token``), or use the ``access_token`` + to fetch UserInfo from the ``userinfo_endpoint`` + (``_fetch_userinfo``) + - map those UserInfo to a Matrix user (``_map_userinfo_to_user``) and + finish the login + + Args: + request: the incoming request from the browser. + session_data: the session data, extracted from our cookie + code: The authorization code we got from the callback. + """ + # Exchange the code with the provider + try: + logger.debug("Exchanging OAuth2 code for a token") + token = await self._exchange_code(code) + except OidcError as e: + logger.exception("Could not exchange OAuth2 code") + self._sso_handler.render_error(request, e.error, e.error_description) + return + + logger.debug("Successfully obtained OAuth2 token data: %r", token) + + # Now that we have a token, get the userinfo, either by decoding the + # `id_token` or by fetching the `userinfo_endpoint`. 
+        if self._uses_userinfo:
+            try:
+                userinfo = await self._fetch_userinfo(token)
+            except Exception as e:
+                logger.exception("Could not fetch userinfo")
+                self._sso_handler.render_error(request, "fetch_error", str(e))
+                return
+        else:
+            try:
+                userinfo = await self._parse_id_token(token, nonce=session_data.nonce)
+            except Exception as e:
+                logger.exception("Invalid id_token")
+                self._sso_handler.render_error(request, "invalid_token", str(e))
+                return
+
+        # first check if we're doing a UIA
+        if session_data.ui_auth_session_id:
+            try:
+                remote_user_id = self._remote_id_from_userinfo(userinfo)
+            except Exception as e:
+                logger.exception("Could not extract remote user id")
+                self._sso_handler.render_error(request, "mapping_error", str(e))
+                return
+
+            return await self._sso_handler.complete_sso_ui_auth_request(
+                self.idp_id, remote_user_id, session_data.ui_auth_session_id, request
+            )
+
+        # otherwise, it's a login
+        logger.debug("Userinfo for OIDC login: %s", userinfo)
+
+        # Ensure that the attributes of the logged in user meet the required
+        # attributes by checking the userinfo against attribute_requirements
+        # In order to deal with the fact that OIDC userinfo can contain many
+        # types of data, we wrap non-list values in lists.
+        if not self._sso_handler.check_required_attributes(
+            request,
+            {k: v if isinstance(v, list) else [v] for k, v in userinfo.items()},
+            self._oidc_attribute_requirements,
+        ):
+            return
+
+        # Call the mapper to register/login the user
+        try:
+            await self._complete_oidc_login(
+                userinfo, token, request, session_data.client_redirect_url
+            )
+        except MappingException as e:
+            logger.exception("Could not map user")
+            self._sso_handler.render_error(request, "mapping_error", str(e))
+
+    async def _complete_oidc_login(
+        self,
+        userinfo: UserInfo,
+        token: Token,
+        request: SynapseRequest,
+        client_redirect_url: str,
+    ) -> None:
+        """Given a UserInfo response, complete the login flow
+
+        UserInfo should have a claim that uniquely identifies users. This claim
+        is usually `sub`, but can be configured with `oidc_config.subject_claim`.
+        It is then used as an `external_id`.
+
+        If we don't find the user that way, we should register the user,
+        mapping the localpart and the display name from the UserInfo.
+
+        If a user already exists with the mxid we've mapped and allow_existing_users
+        is disabled, raise an exception.
+
+        Otherwise, render a redirect back to the client_redirect_url with a loginToken.
+
+        Args:
+            userinfo: an object representing the user
+            token: a dict with the tokens obtained from the provider
+            request: The request to respond to
+            client_redirect_url: The redirect URL passed in by the client.
+
+        Raises:
+            MappingException: if there was an error while mapping some properties
+        """
+        try:
+            remote_user_id = self._remote_id_from_userinfo(userinfo)
+        except Exception as e:
+            raise MappingException(
+                "Failed to extract subject from OIDC response: %s" % (e,)
+            )
+
+        # Older mapping providers don't accept the `failures` argument, so we
+        # try and detect support.
+        mapper_signature = inspect.signature(
+            self._user_mapping_provider.map_user_attributes
+        )
+        supports_failures = "failures" in mapper_signature.parameters
+
+        async def oidc_response_to_user_attributes(failures: int) -> UserAttributes:
+            """
+            Call the mapping provider to map the OIDC userinfo and token to user attributes.
+
+            This is a backwards-compatibility shim for the SSO handler's
+            mapping-provider abstraction.
+ """ + if supports_failures: + attributes = await self._user_mapping_provider.map_user_attributes( + userinfo, token, failures + ) + else: + # If the mapping provider does not support processing failures, + # do not continually generate the same Matrix ID since it will + # continue to already be in use. Note that the error raised is + # arbitrary and will get turned into a MappingException. + if failures: + raise MappingException( + "Mapping provider does not support de-duplicating Matrix IDs" + ) + + attributes = await self._user_mapping_provider.map_user_attributes( # type: ignore + userinfo, token + ) + + return UserAttributes(**attributes) + + async def grandfather_existing_users() -> Optional[str]: + if self._allow_existing_users: + # If allowing existing users we want to generate a single localpart + # and attempt to match it. + attributes = await oidc_response_to_user_attributes(failures=0) + + user_id = UserID(attributes.localpart, self._server_name).to_string() + users = await self._store.get_users_by_id_case_insensitive(user_id) + if users: + # If an existing matrix ID is returned, then use it. + if len(users) == 1: + previously_registered_user_id = next(iter(users)) + elif user_id in users: + previously_registered_user_id = user_id + else: + # Do not attempt to continue generating Matrix IDs. + raise MappingException( + "Attempted to login as '{}' but it matches more than one user inexactly: {}".format( + user_id, users + ) + ) + + return previously_registered_user_id + + return None + + # Mapping providers might not have get_extra_attributes: only call this + # method if it exists. + extra_attributes = None + get_extra_attributes = getattr( + self._user_mapping_provider, "get_extra_attributes", None + ) + if get_extra_attributes: + extra_attributes = await get_extra_attributes(userinfo, token) + + await self._sso_handler.complete_sso_login_request( + self.idp_id, + remote_user_id, + request, + client_redirect_url, + oidc_response_to_user_attributes, + grandfather_existing_users, + extra_attributes, + ) + + def _remote_id_from_userinfo(self, userinfo: UserInfo) -> str: + """Extract the unique remote id from an OIDC UserInfo block + + Args: + userinfo: An object representing the user given by the OIDC provider + Returns: + remote user id + """ + remote_user_id = self._user_mapping_provider.get_remote_user_id(userinfo) + # Some OIDC providers use integer IDs, but Synapse expects external IDs + # to be strings. + return str(remote_user_id) + + +# number of seconds a newly-generated client secret should be valid for +CLIENT_SECRET_VALIDITY_SECONDS = 3600 + +# minimum remaining validity on a client secret before we should generate a new one +CLIENT_SECRET_MIN_VALIDITY_SECONDS = 600 + + +class JwtClientSecret: + """A class which generates a new client secret on demand, based on a JWK + + This implementation is designed to comply with the requirements for Apple Sign in: + https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens#3262048 + + It looks like those requirements are based on https://tools.ietf.org/html/rfc7523, + but it's worth noting that we still put the generated secret in the "client_secret" + field (or rather, whereever client_auth_method puts it) rather than in a + client_assertion field in the body as that RFC seems to require. 
+ """ + + def __init__( + self, + key: OidcProviderClientSecretJwtKey, + oauth_client_id: str, + oauth_issuer: str, + clock: Clock, + ): + self._key = key + self._oauth_client_id = oauth_client_id + self._oauth_issuer = oauth_issuer + self._clock = clock + self._cached_secret = b"" + self._cached_secret_replacement_time = 0 + + def __str__(self): + # if client_auth_method is client_secret_basic, then ClientAuth.prepare calls + # encode_client_secret_basic, which calls "{}".format(secret), which ends up + # here. + return self._get_secret().decode("ascii") + + def __bytes__(self): + # if client_auth_method is client_secret_post, then ClientAuth.prepare calls + # encode_client_secret_post, which ends up here. + return self._get_secret() + + def _get_secret(self) -> bytes: + now = self._clock.time() + + # if we have enough validity on our existing secret, use it + if now < self._cached_secret_replacement_time: + return self._cached_secret + + issued_at = int(now) + expires_at = issued_at + CLIENT_SECRET_VALIDITY_SECONDS + + # we copy the configured header because jwt.encode modifies it. + header = dict(self._key.jwt_header) + + # see https://tools.ietf.org/html/rfc7523#section-3 + payload = { + "sub": self._oauth_client_id, + "aud": self._oauth_issuer, + "iat": issued_at, + "exp": expires_at, + **self._key.jwt_payload, + } + logger.info( + "Generating new JWT for %s: %s %s", self._oauth_issuer, header, payload + ) + self._cached_secret = jwt.encode(header, payload, self._key.key) + self._cached_secret_replacement_time = ( + expires_at - CLIENT_SECRET_MIN_VALIDITY_SECONDS + ) + return self._cached_secret + + +class OidcSessionTokenGenerator: + """Methods for generating and checking OIDC Session cookies.""" + + def __init__(self, hs: "HomeServer"): + self._clock = hs.get_clock() + self._server_name = hs.hostname + self._macaroon_secret_key = hs.config.key.macaroon_secret_key + + def generate_oidc_session_token( + self, + state: str, + session_data: "OidcSessionData", + duration_in_ms: int = (60 * 60 * 1000), + ) -> str: + """Generates a signed token storing data about an OIDC session. + + When Synapse initiates an authorization flow, it creates a random state + and a random nonce. Those parameters are given to the provider and + should be verified when the client comes back from the provider. + It is also used to store the client_redirect_url, which is used to + complete the SSO login flow. + + Args: + state: The ``state`` parameter passed to the OIDC provider. + session_data: data to include in the session token. + duration_in_ms: An optional duration for the token in milliseconds. + Defaults to an hour. + + Returns: + A signed macaroon token with the session information. 
+ """ + macaroon = pymacaroons.Macaroon( + location=self._server_name, + identifier="key", + key=self._macaroon_secret_key, + ) + macaroon.add_first_party_caveat("gen = 1") + macaroon.add_first_party_caveat("type = session") + macaroon.add_first_party_caveat("state = %s" % (state,)) + macaroon.add_first_party_caveat("idp_id = %s" % (session_data.idp_id,)) + macaroon.add_first_party_caveat("nonce = %s" % (session_data.nonce,)) + macaroon.add_first_party_caveat( + "client_redirect_url = %s" % (session_data.client_redirect_url,) + ) + macaroon.add_first_party_caveat( + "ui_auth_session_id = %s" % (session_data.ui_auth_session_id,) + ) + now = self._clock.time_msec() + expiry = now + duration_in_ms + macaroon.add_first_party_caveat("time < %d" % (expiry,)) + + return macaroon.serialize() + + def verify_oidc_session_token( + self, session: bytes, state: str + ) -> "OidcSessionData": + """Verifies and extract an OIDC session token. + + This verifies that a given session token was issued by this homeserver + and extract the nonce and client_redirect_url caveats. + + Args: + session: The session token to verify + state: The state the OIDC provider gave back + + Returns: + The data extracted from the session cookie + + Raises: + KeyError if an expected caveat is missing from the macaroon. + """ + macaroon = pymacaroons.Macaroon.deserialize(session) + + v = pymacaroons.Verifier() + v.satisfy_exact("gen = 1") + v.satisfy_exact("type = session") + v.satisfy_exact("state = %s" % (state,)) + v.satisfy_general(lambda c: c.startswith("nonce = ")) + v.satisfy_general(lambda c: c.startswith("idp_id = ")) + v.satisfy_general(lambda c: c.startswith("client_redirect_url = ")) + v.satisfy_general(lambda c: c.startswith("ui_auth_session_id = ")) + satisfy_expiry(v, self._clock.time_msec) + + v.verify(macaroon, self._macaroon_secret_key) + + # Extract the session data from the token. + nonce = get_value_from_macaroon(macaroon, "nonce") + idp_id = get_value_from_macaroon(macaroon, "idp_id") + client_redirect_url = get_value_from_macaroon(macaroon, "client_redirect_url") + ui_auth_session_id = get_value_from_macaroon(macaroon, "ui_auth_session_id") + return OidcSessionData( + nonce=nonce, + idp_id=idp_id, + client_redirect_url=client_redirect_url, + ui_auth_session_id=ui_auth_session_id, + ) + + +@attr.s(frozen=True, slots=True) +class OidcSessionData: + """The attributes which are stored in a OIDC session cookie""" + + # the Identity Provider being used + idp_id = attr.ib(type=str) + + # The `nonce` parameter passed to the OIDC provider. + nonce = attr.ib(type=str) + + # The URL the client gave when it initiated the flow. ("" if this is a UI Auth) + client_redirect_url = attr.ib(type=str) + + # The session ID of the ongoing UI Auth ("" if this is a login) + ui_auth_session_id = attr.ib(type=str) + + +UserAttributeDict = TypedDict( + "UserAttributeDict", + {"localpart": Optional[str], "display_name": Optional[str], "emails": List[str]}, +) +C = TypeVar("C") + + +class OidcMappingProvider(Generic[C]): + """A mapping provider maps a UserInfo object to user attributes. + + It should provide the API described by this class. 
+ """ + + def __init__(self, config: C): + """ + Args: + config: A custom config object from this module, parsed by ``parse_config()`` + """ + + @staticmethod + def parse_config(config: dict) -> C: + """Parse the dict provided by the homeserver's config + + Args: + config: A dictionary containing configuration options for this provider + + Returns: + A custom config object for this module + """ + raise NotImplementedError() + + def get_remote_user_id(self, userinfo: UserInfo) -> str: + """Get a unique user ID for this user. + + Usually, in an OIDC-compliant scenario, it should be the ``sub`` claim from the UserInfo object. + + Args: + userinfo: An object representing the user given by the OIDC provider + + Returns: + A unique user ID + """ + raise NotImplementedError() + + async def map_user_attributes( + self, userinfo: UserInfo, token: Token, failures: int + ) -> UserAttributeDict: + """Map a `UserInfo` object into user attributes. + + Args: + userinfo: An object representing the user given by the OIDC provider + token: A dict with the tokens returned by the provider + failures: How many times a call to this function with this + UserInfo has resulted in a failure. + + Returns: + A dict containing the ``localpart`` and (optionally) the ``display_name`` + """ + raise NotImplementedError() + + async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict: + """Map a `UserInfo` object into additional attributes passed to the client during login. + + Args: + userinfo: An object representing the user given by the OIDC provider + token: A dict with the tokens returned by the provider + + Returns: + A dict containing additional attributes. Must be JSON serializable. + """ + return {} + + +# Used to clear out "None" values in templates +def jinja_finalize(thing): + return thing if thing is not None else "" + + +env = Environment(finalize=jinja_finalize) + + +@attr.s(slots=True, frozen=True) +class JinjaOidcMappingConfig: + subject_claim = attr.ib(type=str) + localpart_template = attr.ib(type=Optional[Template]) + display_name_template = attr.ib(type=Optional[Template]) + email_template = attr.ib(type=Optional[Template]) + extra_attributes = attr.ib(type=Dict[str, Template]) + + +class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]): + """An implementation of a mapping provider based on Jinja templates. + + This is the default mapping provider. 
+ """ + + def __init__(self, config: JinjaOidcMappingConfig): + self._config = config + + @staticmethod + def parse_config(config: dict) -> JinjaOidcMappingConfig: + subject_claim = config.get("subject_claim", "sub") + + def parse_template_config(option_name: str) -> Optional[Template]: + if option_name not in config: + return None + try: + return env.from_string(config[option_name]) + except Exception as e: + raise ConfigError("invalid jinja template", path=[option_name]) from e + + localpart_template = parse_template_config("localpart_template") + display_name_template = parse_template_config("display_name_template") + email_template = parse_template_config("email_template") + + extra_attributes = {} # type Dict[str, Template] + if "extra_attributes" in config: + extra_attributes_config = config.get("extra_attributes") or {} + if not isinstance(extra_attributes_config, dict): + raise ConfigError("must be a dict", path=["extra_attributes"]) + + for key, value in extra_attributes_config.items(): + try: + extra_attributes[key] = env.from_string(value) + except Exception as e: + raise ConfigError( + "invalid jinja template", path=["extra_attributes", key] + ) from e + + return JinjaOidcMappingConfig( + subject_claim=subject_claim, + localpart_template=localpart_template, + display_name_template=display_name_template, + email_template=email_template, + extra_attributes=extra_attributes, + ) + + def get_remote_user_id(self, userinfo: UserInfo) -> str: + return userinfo[self._config.subject_claim] + + async def map_user_attributes( + self, userinfo: UserInfo, token: Token, failures: int + ) -> UserAttributeDict: + localpart = None + + if self._config.localpart_template: + localpart = self._config.localpart_template.render(user=userinfo).strip() + + # Ensure only valid characters are included in the MXID. + localpart = map_username_to_mxid_localpart(localpart) + + # Append suffix integer if last call to this function failed to produce + # a usable mxid. + localpart += str(failures) if failures else "" + + def render_template_field(template: Optional[Template]) -> Optional[str]: + if template is None: + return None + return template.render(user=userinfo).strip() + + display_name = render_template_field(self._config.display_name_template) + if display_name == "": + display_name = None + + emails = [] # type: List[str] + email = render_template_field(self._config.email_template) + if email: + emails.append(email) + + return UserAttributeDict( + localpart=localpart, display_name=display_name, emails=emails + ) + + async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict: + extras = {} # type: Dict[str, str] + for key, template in self._config.extra_attributes.items(): + try: + extras[key] = template.render(user=userinfo).strip() + except Exception as e: + # Log an error and skip this value (don't break login for this). + logger.error("Failed to render OIDC extra attribute %s: %s" % (key, e)) + return extras diff --git a/synapse/handlers/oidc_handler.py b/synapse/handlers/oidc_handler.py deleted file mode 100644 index b156196a70..0000000000 --- a/synapse/handlers/oidc_handler.py +++ /dev/null @@ -1,1387 +0,0 @@ -# Copyright 2020 Quentin Gliech -# Copyright 2021 The Matrix.org Foundation C.I.C. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import inspect -import logging -from typing import TYPE_CHECKING, Dict, Generic, List, Optional, TypeVar, Union -from urllib.parse import urlencode - -import attr -import pymacaroons -from authlib.common.security import generate_token -from authlib.jose import JsonWebToken, jwt -from authlib.oauth2.auth import ClientAuth -from authlib.oauth2.rfc6749.parameters import prepare_grant_uri -from authlib.oidc.core import CodeIDToken, ImplicitIDToken, UserInfo -from authlib.oidc.discovery import OpenIDProviderMetadata, get_well_known_url -from jinja2 import Environment, Template -from pymacaroons.exceptions import ( - MacaroonDeserializationException, - MacaroonInitException, - MacaroonInvalidSignatureException, -) -from typing_extensions import TypedDict - -from twisted.web.client import readBody -from twisted.web.http_headers import Headers - -from synapse.config import ConfigError -from synapse.config.oidc_config import ( - OidcProviderClientSecretJwtKey, - OidcProviderConfig, -) -from synapse.handlers.sso import MappingException, UserAttributes -from synapse.http.site import SynapseRequest -from synapse.logging.context import make_deferred_yieldable -from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart -from synapse.util import Clock, json_decoder -from synapse.util.caches.cached_call import RetryOnExceptionCachedCall -from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry - -if TYPE_CHECKING: - from synapse.server import HomeServer - -logger = logging.getLogger(__name__) - -# we want the cookie to be returned to us even when the request is the POSTed -# result of a form on another domain, as is used with `response_mode=form_post`. -# -# Modern browsers will not do so unless we set SameSite=None; however *older* -# browsers (including all versions of Safari on iOS 12?) don't support -# SameSite=None, and interpret it as SameSite=Strict: -# https://bugs.webkit.org/show_bug.cgi?id=198181 -# -# As a rather painful workaround, we set *two* cookies, one with SameSite=None -# and one with no SameSite, in the hope that at least one of them will get -# back to us. -# -# Secure is necessary for SameSite=None (and, empirically, also breaks things -# on iOS 12.) -# -# Here we have the names of the cookies, and the options we use to set them. -_SESSION_COOKIES = [ - (b"oidc_session", b"Path=/_synapse/client/oidc; HttpOnly; Secure; SameSite=None"), - (b"oidc_session_no_samesite", b"Path=/_synapse/client/oidc; HttpOnly"), -] - -#: A token exchanged from the token endpoint, as per RFC6749 sec 5.1. and -#: OpenID.Core sec 3.1.3.3. -Token = TypedDict( - "Token", - { - "access_token": str, - "token_type": str, - "id_token": Optional[str], - "refresh_token": Optional[str], - "expires_in": int, - "scope": Optional[str], - }, -) - -#: A JWK, as per RFC7517 sec 4. The type could be more precise than that, but -#: there is no real point of doing this in our case. -JWK = Dict[str, str] - -#: A JWK Set, as per RFC7517 sec 5. 
-JWKS = TypedDict("JWKS", {"keys": List[JWK]}) - - -class OidcHandler: - """Handles requests related to the OpenID Connect login flow.""" - - def __init__(self, hs: "HomeServer"): - self._sso_handler = hs.get_sso_handler() - - provider_confs = hs.config.oidc.oidc_providers - # we should not have been instantiated if there is no configured provider. - assert provider_confs - - self._token_generator = OidcSessionTokenGenerator(hs) - self._providers = { - p.idp_id: OidcProvider(hs, self._token_generator, p) for p in provider_confs - } # type: Dict[str, OidcProvider] - - async def load_metadata(self) -> None: - """Validate the config and load the metadata from the remote endpoint. - - Called at startup to ensure we have everything we need. - """ - for idp_id, p in self._providers.items(): - try: - await p.load_metadata() - await p.load_jwks() - except Exception as e: - raise Exception( - "Error while initialising OIDC provider %r" % (idp_id,) - ) from e - - async def handle_oidc_callback(self, request: SynapseRequest) -> None: - """Handle an incoming request to /_synapse/client/oidc/callback - - Since we might want to display OIDC-related errors in a user-friendly - way, we don't raise SynapseError from here. Instead, we call - ``self._sso_handler.render_error`` which displays an HTML page for the error. - - Most of the OpenID Connect logic happens here: - - - first, we check if there was any error returned by the provider and - display it - - then we fetch the session cookie, decode and verify it - - the ``state`` query parameter should match with the one stored in the - session cookie - - Once we know the session is legit, we then delegate to the OIDC Provider - implementation, which will exchange the code with the provider and complete the - login/authentication. - - Args: - request: the incoming request from the browser. - """ - # This will always be set by the time Twisted calls us. - assert request.args is not None - - # The provider might redirect with an error. - # In that case, just display it as-is. - if b"error" in request.args: - # error response from the auth server. see: - # https://tools.ietf.org/html/rfc6749#section-4.1.2.1 - # https://openid.net/specs/openid-connect-core-1_0.html#AuthError - error = request.args[b"error"][0].decode() - description = request.args.get(b"error_description", [b""])[0].decode() - - # Most of the errors returned by the provider could be due by - # either the provider misbehaving or Synapse being misconfigured. - # The only exception of that is "access_denied", where the user - # probably cancelled the login flow. In other cases, log those errors. - logger.log( - logging.INFO if error == "access_denied" else logging.ERROR, - "Received OIDC callback with error: %s %s", - error, - description, - ) - - self._sso_handler.render_error(request, error, description) - return - - # otherwise, it is presumably a successful response. see: - # https://tools.ietf.org/html/rfc6749#section-4.1.2 - - # Fetch the session cookie. See the comments on SESSION_COOKIES for why there - # are two. - - for cookie_name, _ in _SESSION_COOKIES: - session = request.getCookie(cookie_name) # type: Optional[bytes] - if session is not None: - break - else: - logger.info("Received OIDC callback, with no session cookie") - self._sso_handler.render_error( - request, "missing_session", "No session cookie found" - ) - return - - # Remove the cookies. There is a good chance that if the callback failed - # once, it will fail next time and the code will already be exchanged. 
- # Removing the cookies early avoids spamming the provider with token requests. - # - # we have to build the header by hand rather than calling request.addCookie - # because the latter does not support SameSite=None - # (https://twistedmatrix.com/trac/ticket/10088) - - for cookie_name, options in _SESSION_COOKIES: - request.cookies.append( - b"%s=; Expires=Thu, Jan 01 1970 00:00:00 UTC; %s" - % (cookie_name, options) - ) - - # Check for the state query parameter - if b"state" not in request.args: - logger.info("Received OIDC callback, with no state parameter") - self._sso_handler.render_error( - request, "invalid_request", "State parameter is missing" - ) - return - - state = request.args[b"state"][0].decode() - - # Deserialize the session token and verify it. - try: - session_data = self._token_generator.verify_oidc_session_token( - session, state - ) - except (MacaroonInitException, MacaroonDeserializationException, KeyError) as e: - logger.exception("Invalid session for OIDC callback") - self._sso_handler.render_error(request, "invalid_session", str(e)) - return - except MacaroonInvalidSignatureException as e: - logger.exception("Could not verify session for OIDC callback") - self._sso_handler.render_error(request, "mismatching_session", str(e)) - return - - logger.info("Received OIDC callback for IdP %s", session_data.idp_id) - - oidc_provider = self._providers.get(session_data.idp_id) - if not oidc_provider: - logger.error("OIDC session uses unknown IdP %r", oidc_provider) - self._sso_handler.render_error(request, "unknown_idp", "Unknown IdP") - return - - if b"code" not in request.args: - logger.info("Code parameter is missing") - self._sso_handler.render_error( - request, "invalid_request", "Code parameter is missing" - ) - return - - code = request.args[b"code"][0].decode() - - await oidc_provider.handle_oidc_callback(request, session_data, code) - - -class OidcError(Exception): - """Used to catch errors when calling the token_endpoint""" - - def __init__(self, error, error_description=None): - self.error = error - self.error_description = error_description - - def __str__(self): - if self.error_description: - return "{}: {}".format(self.error, self.error_description) - return self.error - - -class OidcProvider: - """Wraps the config for a single OIDC IdentityProvider - - Provides methods for handling redirect requests and callbacks via that particular - IdP. - """ - - def __init__( - self, - hs: "HomeServer", - token_generator: "OidcSessionTokenGenerator", - provider: OidcProviderConfig, - ): - self._store = hs.get_datastore() - - self._token_generator = token_generator - - self._config = provider - self._callback_url = hs.config.oidc_callback_url # type: str - - self._oidc_attribute_requirements = provider.attribute_requirements - self._scopes = provider.scopes - self._user_profile_method = provider.user_profile_method - - client_secret = None # type: Union[None, str, JwtClientSecret] - if provider.client_secret: - client_secret = provider.client_secret - elif provider.client_secret_jwt_key: - client_secret = JwtClientSecret( - provider.client_secret_jwt_key, - provider.client_id, - provider.issuer, - hs.get_clock(), - ) - - self._client_auth = ClientAuth( - provider.client_id, - client_secret, - provider.client_auth_method, - ) # type: ClientAuth - self._client_auth_method = provider.client_auth_method - - # cache of metadata for the identity provider (endpoint uris, mostly). 
This is - # loaded on-demand from the discovery endpoint (if discovery is enabled), with - # possible overrides from the config. Access via `load_metadata`. - self._provider_metadata = RetryOnExceptionCachedCall(self._load_metadata) - - # cache of JWKs used by the identity provider to sign tokens. Loaded on demand - # from the IdP's jwks_uri, if required. - self._jwks = RetryOnExceptionCachedCall(self._load_jwks) - - self._user_mapping_provider = provider.user_mapping_provider_class( - provider.user_mapping_provider_config - ) - self._skip_verification = provider.skip_verification - self._allow_existing_users = provider.allow_existing_users - - self._http_client = hs.get_proxied_http_client() - self._server_name = hs.config.server_name # type: str - - # identifier for the external_ids table - self.idp_id = provider.idp_id - - # user-facing name of this auth provider - self.idp_name = provider.idp_name - - # MXC URI for icon for this auth provider - self.idp_icon = provider.idp_icon - - # optional brand identifier for this auth provider - self.idp_brand = provider.idp_brand - - # Optional brand identifier for the unstable API (see MSC2858). - self.unstable_idp_brand = provider.unstable_idp_brand - - self._sso_handler = hs.get_sso_handler() - - self._sso_handler.register_identity_provider(self) - - def _validate_metadata(self, m: OpenIDProviderMetadata) -> None: - """Verifies the provider metadata. - - This checks the validity of the currently loaded provider. Not - everything is checked, only: - - - ``issuer`` - - ``authorization_endpoint`` - - ``token_endpoint`` - - ``response_types_supported`` (checks if "code" is in it) - - ``jwks_uri`` - - Raises: - ValueError: if something in the provider is not valid - """ - # Skip verification to allow non-compliant providers (e.g. issuers not running on a secure origin) - if self._skip_verification is True: - return - - m.validate_issuer() - m.validate_authorization_endpoint() - m.validate_token_endpoint() - - if m.get("token_endpoint_auth_methods_supported") is not None: - m.validate_token_endpoint_auth_methods_supported() - if ( - self._client_auth_method - not in m["token_endpoint_auth_methods_supported"] - ): - raise ValueError( - '"{auth_method}" not in "token_endpoint_auth_methods_supported" ({supported!r})'.format( - auth_method=self._client_auth_method, - supported=m["token_endpoint_auth_methods_supported"], - ) - ) - - if m.get("response_types_supported") is not None: - m.validate_response_types_supported() - - if "code" not in m["response_types_supported"]: - raise ValueError( - '"code" not in "response_types_supported" (%r)' - % (m["response_types_supported"],) - ) - - # Ensure there's a userinfo endpoint to fetch from if it is required. - if self._uses_userinfo: - if m.get("userinfo_endpoint") is None: - raise ValueError( - 'provider has no "userinfo_endpoint", even though it is required' - ) - else: - # If we're not using userinfo, we need a valid jwks to validate the ID token - m.validate_jwks_uri() - - @property - def _uses_userinfo(self) -> bool: - """Returns True if the ``userinfo_endpoint`` should be used. - - This is based on the requested scopes: if the scopes include - ``openid``, the provider should give use an ID token containing the - user information. If not, we should fetch them using the - ``access_token`` with the ``userinfo_endpoint``. 
- """ - - return ( - "openid" not in self._scopes - or self._user_profile_method == "userinfo_endpoint" - ) - - async def load_metadata(self, force: bool = False) -> OpenIDProviderMetadata: - """Return the provider metadata. - - If this is the first call, the metadata is built from the config and from the - metadata discovery endpoint (if enabled), and then validated. If the metadata - is successfully validated, it is then cached for future use. - - Args: - force: If true, any cached metadata is discarded to force a reload. - - Raises: - ValueError: if something in the provider is not valid - - Returns: - The provider's metadata. - """ - if force: - # reset the cached call to ensure we get a new result - self._provider_metadata = RetryOnExceptionCachedCall(self._load_metadata) - - return await self._provider_metadata.get() - - async def _load_metadata(self) -> OpenIDProviderMetadata: - # start out with just the issuer (unlike the other settings, discovered issuer - # takes precedence over configured issuer, because configured issuer is - # required for discovery to take place.) - # - metadata = OpenIDProviderMetadata(issuer=self._config.issuer) - - # load any data from the discovery endpoint, if enabled - if self._config.discover: - url = get_well_known_url(self._config.issuer, external=True) - metadata_response = await self._http_client.get_json(url) - metadata.update(metadata_response) - - # override any discovered data with any settings in our config - if self._config.authorization_endpoint: - metadata["authorization_endpoint"] = self._config.authorization_endpoint - - if self._config.token_endpoint: - metadata["token_endpoint"] = self._config.token_endpoint - - if self._config.userinfo_endpoint: - metadata["userinfo_endpoint"] = self._config.userinfo_endpoint - - if self._config.jwks_uri: - metadata["jwks_uri"] = self._config.jwks_uri - - self._validate_metadata(metadata) - - return metadata - - async def load_jwks(self, force: bool = False) -> JWKS: - """Load the JSON Web Key Set used to sign ID tokens. - - If we're not using the ``userinfo_endpoint``, user infos are extracted - from the ID token, which is a JWT signed by keys given by the provider. - The keys are then cached. - - Args: - force: Force reloading the keys. - - Returns: - The key set - - Looks like this:: - - { - 'keys': [ - { - 'kid': 'abcdef', - 'kty': 'RSA', - 'alg': 'RS256', - 'use': 'sig', - 'e': 'XXXX', - 'n': 'XXXX', - } - ] - } - """ - if force: - # reset the cached call to ensure we get a new result - self._jwks = RetryOnExceptionCachedCall(self._load_jwks) - return await self._jwks.get() - - async def _load_jwks(self) -> JWKS: - if self._uses_userinfo: - # We're not using jwt signing, return an empty jwk set - return {"keys": []} - - metadata = await self.load_metadata() - - # Load the JWKS using the `jwks_uri` metadata. - uri = metadata.get("jwks_uri") - if not uri: - # this should be unreachable: load_metadata validates that - # there is a jwks_uri in the metadata if _uses_userinfo is unset - raise RuntimeError('Missing "jwks_uri" in metadata') - - jwk_set = await self._http_client.get_json(uri) - - return jwk_set - - async def _exchange_code(self, code: str) -> Token: - """Exchange an authorization code for a token. - - This calls the ``token_endpoint`` with the authorization code we - received in the callback to exchange it for a token. The call uses the - ``ClientAuth`` to authenticate with the client with its ID and secret. 
- - See: - https://tools.ietf.org/html/rfc6749#section-3.2 - https://openid.net/specs/openid-connect-core-1_0.html#TokenEndpoint - - Args: - code: The authorization code we got from the callback. - - Returns: - A dict containing various tokens. - - May look like this:: - - { - 'token_type': 'bearer', - 'access_token': 'abcdef', - 'expires_in': 3599, - 'id_token': 'ghijkl', - 'refresh_token': 'mnopqr', - } - - Raises: - OidcError: when the ``token_endpoint`` returned an error. - """ - metadata = await self.load_metadata() - token_endpoint = metadata.get("token_endpoint") - raw_headers = { - "Content-Type": "application/x-www-form-urlencoded", - "User-Agent": self._http_client.user_agent, - "Accept": "application/json", - } - - args = { - "grant_type": "authorization_code", - "code": code, - "redirect_uri": self._callback_url, - } - body = urlencode(args, True) - - # Fill the body/headers with credentials - uri, raw_headers, body = self._client_auth.prepare( - method="POST", uri=token_endpoint, headers=raw_headers, body=body - ) - headers = Headers({k: [v] for (k, v) in raw_headers.items()}) - - # Do the actual request - # We're not using the SimpleHttpClient util methods as we don't want to - # check the HTTP status code and we do the body encoding ourself. - response = await self._http_client.request( - method="POST", - uri=uri, - data=body.encode("utf-8"), - headers=headers, - ) - - # This is used in multiple error messages below - status = "{code} {phrase}".format( - code=response.code, phrase=response.phrase.decode("utf-8") - ) - - resp_body = await make_deferred_yieldable(readBody(response)) - - if response.code >= 500: - # In case of a server error, we should first try to decode the body - # and check for an error field. If not, we respond with a generic - # error message. - try: - resp = json_decoder.decode(resp_body.decode("utf-8")) - error = resp["error"] - description = resp.get("error_description", error) - except (ValueError, KeyError): - # Catch ValueError for the JSON decoding and KeyError for the "error" field - error = "server_error" - description = ( - ( - 'Authorization server responded with a "{status}" error ' - "while exchanging the authorization code." - ).format(status=status), - ) - - raise OidcError(error, description) - - # Since it is a not a 5xx code, body should be a valid JSON. It will - # raise if not. - resp = json_decoder.decode(resp_body.decode("utf-8")) - - if "error" in resp: - error = resp["error"] - # In case the authorization server responded with an error field, - # it should be a 4xx code. If not, warn about it but don't do - # anything special and report the original error message. - if response.code < 400: - logger.debug( - "Invalid response from the authorization server: " - 'responded with a "{status}" ' - "but body has an error field: {error!r}".format( - status=status, error=resp["error"] - ) - ) - - description = resp.get("error_description", error) - raise OidcError(error, description) - - # Now, this should not be an error. According to RFC6749 sec 5.1, it - # should be a 200 code. We're a bit more flexible than that, and will - # only throw on a 4xx code. - if response.code >= 400: - description = ( - 'Authorization server responded with a "{status}" error ' - 'but did not include an "error" field in its response.'.format( - status=status - ) - ) - logger.warning(description) - # Body was still valid JSON. Might be useful to log it for debugging. 
- logger.warning("Code exchange response: {resp!r}".format(resp=resp)) - raise OidcError("server_error", description) - - return resp - - async def _fetch_userinfo(self, token: Token) -> UserInfo: - """Fetch user information from the ``userinfo_endpoint``. - - Args: - token: the token given by the ``token_endpoint``. - Must include an ``access_token`` field. - - Returns: - UserInfo: an object representing the user. - """ - logger.debug("Using the OAuth2 access_token to request userinfo") - metadata = await self.load_metadata() - - resp = await self._http_client.get_json( - metadata["userinfo_endpoint"], - headers={"Authorization": ["Bearer {}".format(token["access_token"])]}, - ) - - logger.debug("Retrieved user info from userinfo endpoint: %r", resp) - - return UserInfo(resp) - - async def _parse_id_token(self, token: Token, nonce: str) -> UserInfo: - """Return an instance of UserInfo from token's ``id_token``. - - Args: - token: the token given by the ``token_endpoint``. - Must include an ``id_token`` field. - nonce: the nonce value originally sent in the initial authorization - request. This value should match the one inside the token. - - Returns: - An object representing the user. - """ - metadata = await self.load_metadata() - claims_params = { - "nonce": nonce, - "client_id": self._client_auth.client_id, - } - if "access_token" in token: - # If we got an `access_token`, there should be an `at_hash` claim - # in the `id_token` that we can check against. - claims_params["access_token"] = token["access_token"] - claims_cls = CodeIDToken - else: - claims_cls = ImplicitIDToken - - alg_values = metadata.get("id_token_signing_alg_values_supported", ["RS256"]) - jwt = JsonWebToken(alg_values) - - claim_options = {"iss": {"values": [metadata["issuer"]]}} - - id_token = token["id_token"] - logger.debug("Attempting to decode JWT id_token %r", id_token) - - # Try to decode the keys in cache first, then retry by forcing the keys - # to be reloaded - jwk_set = await self.load_jwks() - try: - claims = jwt.decode( - id_token, - key=jwk_set, - claims_cls=claims_cls, - claims_options=claim_options, - claims_params=claims_params, - ) - except ValueError: - logger.info("Reloading JWKS after decode error") - jwk_set = await self.load_jwks(force=True) # try reloading the jwks - claims = jwt.decode( - id_token, - key=jwk_set, - claims_cls=claims_cls, - claims_options=claim_options, - claims_params=claims_params, - ) - - logger.debug("Decoded id_token JWT %r; validating", claims) - - claims.validate(leeway=120) # allows 2 min of clock skew - return UserInfo(claims) - - async def handle_redirect_request( - self, - request: SynapseRequest, - client_redirect_url: Optional[bytes], - ui_auth_session_id: Optional[str] = None, - ) -> str: - """Handle an incoming request to /login/sso/redirect - - It returns a redirect to the authorization endpoint with a few - parameters: - - - ``client_id``: the client ID set in ``oidc_config.client_id`` - - ``response_type``: ``code`` - - ``redirect_uri``: the callback URL ; ``{base url}/_synapse/client/oidc/callback`` - - ``scope``: the list of scopes set in ``oidc_config.scopes`` - - ``state``: a random string - - ``nonce``: a random string - - In addition generating a redirect URL, we are setting a cookie with - a signed macaroon token containing the state, the nonce and the - client_redirect_url params. Those are then checked when the client - comes back from the provider. - - Args: - request: the incoming request from the browser. 
- We'll respond to it with a redirect and a cookie. - client_redirect_url: the URL that we should redirect the client to - when everything is done (or None for UI Auth) - ui_auth_session_id: The session ID of the ongoing UI Auth (or - None if this is a login). - - Returns: - The redirect URL to the authorization endpoint. - - """ - - state = generate_token() - nonce = generate_token() - - if not client_redirect_url: - client_redirect_url = b"" - - cookie = self._token_generator.generate_oidc_session_token( - state=state, - session_data=OidcSessionData( - idp_id=self.idp_id, - nonce=nonce, - client_redirect_url=client_redirect_url.decode(), - ui_auth_session_id=ui_auth_session_id or "", - ), - ) - - # Set the cookies. See the comments on _SESSION_COOKIES for why there are two. - # - # we have to build the header by hand rather than calling request.addCookie - # because the latter does not support SameSite=None - # (https://twistedmatrix.com/trac/ticket/10088) - - for cookie_name, options in _SESSION_COOKIES: - request.cookies.append( - b"%s=%s; Max-Age=3600; %s" - % (cookie_name, cookie.encode("utf-8"), options) - ) - - metadata = await self.load_metadata() - authorization_endpoint = metadata.get("authorization_endpoint") - return prepare_grant_uri( - authorization_endpoint, - client_id=self._client_auth.client_id, - response_type="code", - redirect_uri=self._callback_url, - scope=self._scopes, - state=state, - nonce=nonce, - ) - - async def handle_oidc_callback( - self, request: SynapseRequest, session_data: "OidcSessionData", code: str - ) -> None: - """Handle an incoming request to /_synapse/client/oidc/callback - - By this time we have already validated the session on the synapse side, and - now need to do the provider-specific operations. This includes: - - - exchange the code with the provider using the ``token_endpoint`` (see - ``_exchange_code``) - - once we have the token, use it to either extract the UserInfo from - the ``id_token`` (``_parse_id_token``), or use the ``access_token`` - to fetch UserInfo from the ``userinfo_endpoint`` - (``_fetch_userinfo``) - - map those UserInfo to a Matrix user (``_map_userinfo_to_user``) and - finish the login - - Args: - request: the incoming request from the browser. - session_data: the session data, extracted from our cookie - code: The authorization code we got from the callback. - """ - # Exchange the code with the provider - try: - logger.debug("Exchanging OAuth2 code for a token") - token = await self._exchange_code(code) - except OidcError as e: - logger.exception("Could not exchange OAuth2 code") - self._sso_handler.render_error(request, e.error, e.error_description) - return - - logger.debug("Successfully obtained OAuth2 token data: %r", token) - - # Now that we have a token, get the userinfo, either by decoding the - # `id_token` or by fetching the `userinfo_endpoint`. 
- if self._uses_userinfo: - try: - userinfo = await self._fetch_userinfo(token) - except Exception as e: - logger.exception("Could not fetch userinfo") - self._sso_handler.render_error(request, "fetch_error", str(e)) - return - else: - try: - userinfo = await self._parse_id_token(token, nonce=session_data.nonce) - except Exception as e: - logger.exception("Invalid id_token") - self._sso_handler.render_error(request, "invalid_token", str(e)) - return - - # first check if we're doing a UIA - if session_data.ui_auth_session_id: - try: - remote_user_id = self._remote_id_from_userinfo(userinfo) - except Exception as e: - logger.exception("Could not extract remote user id") - self._sso_handler.render_error(request, "mapping_error", str(e)) - return - - return await self._sso_handler.complete_sso_ui_auth_request( - self.idp_id, remote_user_id, session_data.ui_auth_session_id, request - ) - - # otherwise, it's a login - logger.debug("Userinfo for OIDC login: %s", userinfo) - - # Ensure that the attributes of the logged in user meet the required - # attributes by checking the userinfo against attribute_requirements - # In order to deal with the fact that OIDC userinfo can contain many - # types of data, we wrap non-list values in lists. - if not self._sso_handler.check_required_attributes( - request, - {k: v if isinstance(v, list) else [v] for k, v in userinfo.items()}, - self._oidc_attribute_requirements, - ): - return - - # Call the mapper to register/login the user - try: - await self._complete_oidc_login( - userinfo, token, request, session_data.client_redirect_url - ) - except MappingException as e: - logger.exception("Could not map user") - self._sso_handler.render_error(request, "mapping_error", str(e)) - - async def _complete_oidc_login( - self, - userinfo: UserInfo, - token: Token, - request: SynapseRequest, - client_redirect_url: str, - ) -> None: - """Given a UserInfo response, complete the login flow - - UserInfo should have a claim that uniquely identifies users. This claim - is usually `sub`, but can be configured with `oidc_config.subject_claim`. - It is then used as an `external_id`. - - If we don't find the user that way, we should register the user, - mapping the localpart and the display name from the UserInfo. - - If a user already exists with the mxid we've mapped and allow_existing_users - is disabled, raise an exception. - - Otherwise, render a redirect back to the client_redirect_url with a loginToken. - - Args: - userinfo: an object representing the user - token: a dict with the tokens obtained from the provider - request: The request to respond to - client_redirect_url: The redirect URL passed in by the client. - - Raises: - MappingException: if there was an error while mapping some properties - """ - try: - remote_user_id = self._remote_id_from_userinfo(userinfo) - except Exception as e: - raise MappingException( - "Failed to extract subject from OIDC response: %s" % (e,) - ) - - # Older mapping providers don't accept the `failures` argument, so we - # try and detect support. - mapper_signature = inspect.signature( - self._user_mapping_provider.map_user_attributes - ) - supports_failures = "failures" in mapper_signature.parameters - - async def oidc_response_to_user_attributes(failures: int) -> UserAttributes: - """ - Call the mapping provider to map the OIDC userinfo and token to user attributes. - - This is backwards compatibility for abstraction for the SSO handler. 
- """ - if supports_failures: - attributes = await self._user_mapping_provider.map_user_attributes( - userinfo, token, failures - ) - else: - # If the mapping provider does not support processing failures, - # do not continually generate the same Matrix ID since it will - # continue to already be in use. Note that the error raised is - # arbitrary and will get turned into a MappingException. - if failures: - raise MappingException( - "Mapping provider does not support de-duplicating Matrix IDs" - ) - - attributes = await self._user_mapping_provider.map_user_attributes( # type: ignore - userinfo, token - ) - - return UserAttributes(**attributes) - - async def grandfather_existing_users() -> Optional[str]: - if self._allow_existing_users: - # If allowing existing users we want to generate a single localpart - # and attempt to match it. - attributes = await oidc_response_to_user_attributes(failures=0) - - user_id = UserID(attributes.localpart, self._server_name).to_string() - users = await self._store.get_users_by_id_case_insensitive(user_id) - if users: - # If an existing matrix ID is returned, then use it. - if len(users) == 1: - previously_registered_user_id = next(iter(users)) - elif user_id in users: - previously_registered_user_id = user_id - else: - # Do not attempt to continue generating Matrix IDs. - raise MappingException( - "Attempted to login as '{}' but it matches more than one user inexactly: {}".format( - user_id, users - ) - ) - - return previously_registered_user_id - - return None - - # Mapping providers might not have get_extra_attributes: only call this - # method if it exists. - extra_attributes = None - get_extra_attributes = getattr( - self._user_mapping_provider, "get_extra_attributes", None - ) - if get_extra_attributes: - extra_attributes = await get_extra_attributes(userinfo, token) - - await self._sso_handler.complete_sso_login_request( - self.idp_id, - remote_user_id, - request, - client_redirect_url, - oidc_response_to_user_attributes, - grandfather_existing_users, - extra_attributes, - ) - - def _remote_id_from_userinfo(self, userinfo: UserInfo) -> str: - """Extract the unique remote id from an OIDC UserInfo block - - Args: - userinfo: An object representing the user given by the OIDC provider - Returns: - remote user id - """ - remote_user_id = self._user_mapping_provider.get_remote_user_id(userinfo) - # Some OIDC providers use integer IDs, but Synapse expects external IDs - # to be strings. - return str(remote_user_id) - - -# number of seconds a newly-generated client secret should be valid for -CLIENT_SECRET_VALIDITY_SECONDS = 3600 - -# minimum remaining validity on a client secret before we should generate a new one -CLIENT_SECRET_MIN_VALIDITY_SECONDS = 600 - - -class JwtClientSecret: - """A class which generates a new client secret on demand, based on a JWK - - This implementation is designed to comply with the requirements for Apple Sign in: - https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens#3262048 - - It looks like those requirements are based on https://tools.ietf.org/html/rfc7523, - but it's worth noting that we still put the generated secret in the "client_secret" - field (or rather, whereever client_auth_method puts it) rather than in a - client_assertion field in the body as that RFC seems to require. 
- """ - - def __init__( - self, - key: OidcProviderClientSecretJwtKey, - oauth_client_id: str, - oauth_issuer: str, - clock: Clock, - ): - self._key = key - self._oauth_client_id = oauth_client_id - self._oauth_issuer = oauth_issuer - self._clock = clock - self._cached_secret = b"" - self._cached_secret_replacement_time = 0 - - def __str__(self): - # if client_auth_method is client_secret_basic, then ClientAuth.prepare calls - # encode_client_secret_basic, which calls "{}".format(secret), which ends up - # here. - return self._get_secret().decode("ascii") - - def __bytes__(self): - # if client_auth_method is client_secret_post, then ClientAuth.prepare calls - # encode_client_secret_post, which ends up here. - return self._get_secret() - - def _get_secret(self) -> bytes: - now = self._clock.time() - - # if we have enough validity on our existing secret, use it - if now < self._cached_secret_replacement_time: - return self._cached_secret - - issued_at = int(now) - expires_at = issued_at + CLIENT_SECRET_VALIDITY_SECONDS - - # we copy the configured header because jwt.encode modifies it. - header = dict(self._key.jwt_header) - - # see https://tools.ietf.org/html/rfc7523#section-3 - payload = { - "sub": self._oauth_client_id, - "aud": self._oauth_issuer, - "iat": issued_at, - "exp": expires_at, - **self._key.jwt_payload, - } - logger.info( - "Generating new JWT for %s: %s %s", self._oauth_issuer, header, payload - ) - self._cached_secret = jwt.encode(header, payload, self._key.key) - self._cached_secret_replacement_time = ( - expires_at - CLIENT_SECRET_MIN_VALIDITY_SECONDS - ) - return self._cached_secret - - -class OidcSessionTokenGenerator: - """Methods for generating and checking OIDC Session cookies.""" - - def __init__(self, hs: "HomeServer"): - self._clock = hs.get_clock() - self._server_name = hs.hostname - self._macaroon_secret_key = hs.config.key.macaroon_secret_key - - def generate_oidc_session_token( - self, - state: str, - session_data: "OidcSessionData", - duration_in_ms: int = (60 * 60 * 1000), - ) -> str: - """Generates a signed token storing data about an OIDC session. - - When Synapse initiates an authorization flow, it creates a random state - and a random nonce. Those parameters are given to the provider and - should be verified when the client comes back from the provider. - It is also used to store the client_redirect_url, which is used to - complete the SSO login flow. - - Args: - state: The ``state`` parameter passed to the OIDC provider. - session_data: data to include in the session token. - duration_in_ms: An optional duration for the token in milliseconds. - Defaults to an hour. - - Returns: - A signed macaroon token with the session information. 
- """ - macaroon = pymacaroons.Macaroon( - location=self._server_name, - identifier="key", - key=self._macaroon_secret_key, - ) - macaroon.add_first_party_caveat("gen = 1") - macaroon.add_first_party_caveat("type = session") - macaroon.add_first_party_caveat("state = %s" % (state,)) - macaroon.add_first_party_caveat("idp_id = %s" % (session_data.idp_id,)) - macaroon.add_first_party_caveat("nonce = %s" % (session_data.nonce,)) - macaroon.add_first_party_caveat( - "client_redirect_url = %s" % (session_data.client_redirect_url,) - ) - macaroon.add_first_party_caveat( - "ui_auth_session_id = %s" % (session_data.ui_auth_session_id,) - ) - now = self._clock.time_msec() - expiry = now + duration_in_ms - macaroon.add_first_party_caveat("time < %d" % (expiry,)) - - return macaroon.serialize() - - def verify_oidc_session_token( - self, session: bytes, state: str - ) -> "OidcSessionData": - """Verifies and extract an OIDC session token. - - This verifies that a given session token was issued by this homeserver - and extract the nonce and client_redirect_url caveats. - - Args: - session: The session token to verify - state: The state the OIDC provider gave back - - Returns: - The data extracted from the session cookie - - Raises: - KeyError if an expected caveat is missing from the macaroon. - """ - macaroon = pymacaroons.Macaroon.deserialize(session) - - v = pymacaroons.Verifier() - v.satisfy_exact("gen = 1") - v.satisfy_exact("type = session") - v.satisfy_exact("state = %s" % (state,)) - v.satisfy_general(lambda c: c.startswith("nonce = ")) - v.satisfy_general(lambda c: c.startswith("idp_id = ")) - v.satisfy_general(lambda c: c.startswith("client_redirect_url = ")) - v.satisfy_general(lambda c: c.startswith("ui_auth_session_id = ")) - satisfy_expiry(v, self._clock.time_msec) - - v.verify(macaroon, self._macaroon_secret_key) - - # Extract the session data from the token. - nonce = get_value_from_macaroon(macaroon, "nonce") - idp_id = get_value_from_macaroon(macaroon, "idp_id") - client_redirect_url = get_value_from_macaroon(macaroon, "client_redirect_url") - ui_auth_session_id = get_value_from_macaroon(macaroon, "ui_auth_session_id") - return OidcSessionData( - nonce=nonce, - idp_id=idp_id, - client_redirect_url=client_redirect_url, - ui_auth_session_id=ui_auth_session_id, - ) - - -@attr.s(frozen=True, slots=True) -class OidcSessionData: - """The attributes which are stored in a OIDC session cookie""" - - # the Identity Provider being used - idp_id = attr.ib(type=str) - - # The `nonce` parameter passed to the OIDC provider. - nonce = attr.ib(type=str) - - # The URL the client gave when it initiated the flow. ("" if this is a UI Auth) - client_redirect_url = attr.ib(type=str) - - # The session ID of the ongoing UI Auth ("" if this is a login) - ui_auth_session_id = attr.ib(type=str) - - -UserAttributeDict = TypedDict( - "UserAttributeDict", - {"localpart": Optional[str], "display_name": Optional[str], "emails": List[str]}, -) -C = TypeVar("C") - - -class OidcMappingProvider(Generic[C]): - """A mapping provider maps a UserInfo object to user attributes. - - It should provide the API described by this class. 
- """ - - def __init__(self, config: C): - """ - Args: - config: A custom config object from this module, parsed by ``parse_config()`` - """ - - @staticmethod - def parse_config(config: dict) -> C: - """Parse the dict provided by the homeserver's config - - Args: - config: A dictionary containing configuration options for this provider - - Returns: - A custom config object for this module - """ - raise NotImplementedError() - - def get_remote_user_id(self, userinfo: UserInfo) -> str: - """Get a unique user ID for this user. - - Usually, in an OIDC-compliant scenario, it should be the ``sub`` claim from the UserInfo object. - - Args: - userinfo: An object representing the user given by the OIDC provider - - Returns: - A unique user ID - """ - raise NotImplementedError() - - async def map_user_attributes( - self, userinfo: UserInfo, token: Token, failures: int - ) -> UserAttributeDict: - """Map a `UserInfo` object into user attributes. - - Args: - userinfo: An object representing the user given by the OIDC provider - token: A dict with the tokens returned by the provider - failures: How many times a call to this function with this - UserInfo has resulted in a failure. - - Returns: - A dict containing the ``localpart`` and (optionally) the ``display_name`` - """ - raise NotImplementedError() - - async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict: - """Map a `UserInfo` object into additional attributes passed to the client during login. - - Args: - userinfo: An object representing the user given by the OIDC provider - token: A dict with the tokens returned by the provider - - Returns: - A dict containing additional attributes. Must be JSON serializable. - """ - return {} - - -# Used to clear out "None" values in templates -def jinja_finalize(thing): - return thing if thing is not None else "" - - -env = Environment(finalize=jinja_finalize) - - -@attr.s(slots=True, frozen=True) -class JinjaOidcMappingConfig: - subject_claim = attr.ib(type=str) - localpart_template = attr.ib(type=Optional[Template]) - display_name_template = attr.ib(type=Optional[Template]) - email_template = attr.ib(type=Optional[Template]) - extra_attributes = attr.ib(type=Dict[str, Template]) - - -class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]): - """An implementation of a mapping provider based on Jinja templates. - - This is the default mapping provider. 
- """ - - def __init__(self, config: JinjaOidcMappingConfig): - self._config = config - - @staticmethod - def parse_config(config: dict) -> JinjaOidcMappingConfig: - subject_claim = config.get("subject_claim", "sub") - - def parse_template_config(option_name: str) -> Optional[Template]: - if option_name not in config: - return None - try: - return env.from_string(config[option_name]) - except Exception as e: - raise ConfigError("invalid jinja template", path=[option_name]) from e - - localpart_template = parse_template_config("localpart_template") - display_name_template = parse_template_config("display_name_template") - email_template = parse_template_config("email_template") - - extra_attributes = {} # type Dict[str, Template] - if "extra_attributes" in config: - extra_attributes_config = config.get("extra_attributes") or {} - if not isinstance(extra_attributes_config, dict): - raise ConfigError("must be a dict", path=["extra_attributes"]) - - for key, value in extra_attributes_config.items(): - try: - extra_attributes[key] = env.from_string(value) - except Exception as e: - raise ConfigError( - "invalid jinja template", path=["extra_attributes", key] - ) from e - - return JinjaOidcMappingConfig( - subject_claim=subject_claim, - localpart_template=localpart_template, - display_name_template=display_name_template, - email_template=email_template, - extra_attributes=extra_attributes, - ) - - def get_remote_user_id(self, userinfo: UserInfo) -> str: - return userinfo[self._config.subject_claim] - - async def map_user_attributes( - self, userinfo: UserInfo, token: Token, failures: int - ) -> UserAttributeDict: - localpart = None - - if self._config.localpart_template: - localpart = self._config.localpart_template.render(user=userinfo).strip() - - # Ensure only valid characters are included in the MXID. - localpart = map_username_to_mxid_localpart(localpart) - - # Append suffix integer if last call to this function failed to produce - # a usable mxid. - localpart += str(failures) if failures else "" - - def render_template_field(template: Optional[Template]) -> Optional[str]: - if template is None: - return None - return template.render(user=userinfo).strip() - - display_name = render_template_field(self._config.display_name_template) - if display_name == "": - display_name = None - - emails = [] # type: List[str] - email = render_template_field(self._config.email_template) - if email: - emails.append(email) - - return UserAttributeDict( - localpart=localpart, display_name=display_name, emails=emails - ) - - async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict: - extras = {} # type: Dict[str, str] - for key, template in self._config.extra_attributes.items(): - try: - extras[key] = template.render(user=userinfo).strip() - except Exception as e: - # Log an error and skip this value (don't break login for this). - logger.error("Failed to render OIDC extra attribute %s: %s" % (key, e)) - return extras diff --git a/synapse/handlers/saml.py b/synapse/handlers/saml.py new file mode 100644 index 0000000000..80ba65b9e0 --- /dev/null +++ b/synapse/handlers/saml.py @@ -0,0 +1,517 @@ +# Copyright 2019 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import logging +import re +from typing import TYPE_CHECKING, Callable, Dict, Optional, Set, Tuple + +import attr +import saml2 +import saml2.response +from saml2.client import Saml2Client + +from synapse.api.errors import SynapseError +from synapse.config import ConfigError +from synapse.handlers._base import BaseHandler +from synapse.handlers.sso import MappingException, UserAttributes +from synapse.http.servlet import parse_string +from synapse.http.site import SynapseRequest +from synapse.module_api import ModuleApi +from synapse.types import ( + UserID, + map_username_to_mxid_localpart, + mxid_localpart_allowed_characters, +) +from synapse.util.iterutils import chunk_seq + +if TYPE_CHECKING: + from synapse.server import HomeServer + +logger = logging.getLogger(__name__) + + +@attr.s(slots=True) +class Saml2SessionData: + """Data we track about SAML2 sessions""" + + # time the session was created, in milliseconds + creation_time = attr.ib() + # The user interactive authentication session ID associated with this SAML + # session (or None if this SAML session is for an initial login). + ui_auth_session_id = attr.ib(type=Optional[str], default=None) + + +class SamlHandler(BaseHandler): + def __init__(self, hs: "HomeServer"): + super().__init__(hs) + self._saml_client = Saml2Client(hs.config.saml2_sp_config) + self._saml_idp_entityid = hs.config.saml2_idp_entityid + + self._saml2_session_lifetime = hs.config.saml2_session_lifetime + self._grandfathered_mxid_source_attribute = ( + hs.config.saml2_grandfathered_mxid_source_attribute + ) + self._saml2_attribute_requirements = hs.config.saml2.attribute_requirements + self._error_template = hs.config.sso_error_template + + # plugin to do custom mapping from saml response to mxid + self._user_mapping_provider = hs.config.saml2_user_mapping_provider_class( + hs.config.saml2_user_mapping_provider_config, + ModuleApi(hs, hs.get_auth_handler()), + ) + + # identifier for the external_ids table + self.idp_id = "saml" + + # user-facing name of this auth provider + self.idp_name = "SAML" + + # we do not currently support icons/brands for SAML auth, but this is required by + # the SsoIdentityProvider protocol type. + self.idp_icon = None + self.idp_brand = None + self.unstable_idp_brand = None + + # a map from saml session id to Saml2SessionData object + self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData] + + self._sso_handler = hs.get_sso_handler() + self._sso_handler.register_identity_provider(self) + + async def handle_redirect_request( + self, + request: SynapseRequest, + client_redirect_url: Optional[bytes], + ui_auth_session_id: Optional[str] = None, + ) -> str: + """Handle an incoming request to /login/sso/redirect + + Args: + request: the incoming HTTP request + client_redirect_url: the URL that we should redirect the + client to after login (or None for UI Auth). + ui_auth_session_id: The session ID of the ongoing UI Auth (or + None if this is a login). + + Returns: + URL to redirect to + """ + if not client_redirect_url: + # Some SAML identity providers (e.g. 
Google) require a + # RelayState parameter on requests, so pass in a dummy redirect URL + # (which will never get used). + client_redirect_url = b"unused" + + reqid, info = self._saml_client.prepare_for_authenticate( + entityid=self._saml_idp_entityid, relay_state=client_redirect_url + ) + + # Since SAML sessions time out, it is useful to log when they were created. + logger.info("Initiating a new SAML session: %s" % (reqid,)) + + now = self.clock.time_msec() + self._outstanding_requests_dict[reqid] = Saml2SessionData( + creation_time=now, + ui_auth_session_id=ui_auth_session_id, + ) + + for key, value in info["headers"]: + if key == "Location": + return value + + # this shouldn't happen! + raise Exception("prepare_for_authenticate didn't return a Location header") + + async def handle_saml_response(self, request: SynapseRequest) -> None: + """Handle an incoming request to /_synapse/client/saml2/authn_response + + Args: + request: the incoming request from the browser. We'll + respond to it with a redirect. + + Returns: + Completes once we have handled the request. + """ + resp_bytes = parse_string(request, "SAMLResponse", required=True) + relay_state = parse_string(request, "RelayState", required=True) + + # expire outstanding sessions before parse_authn_request_response checks + # the dict. + self.expire_sessions() + + try: + saml2_auth = self._saml_client.parse_authn_request_response( + resp_bytes, + saml2.BINDING_HTTP_POST, + outstanding=self._outstanding_requests_dict, + ) + except saml2.response.UnsolicitedResponse as e: + # the pysaml2 library helpfully logs an ERROR here, but neglects to log + # the session ID. I don't really want to put the full text of the exception + # in the (user-visible) exception message, so let's log the exception here + # so we can track down the session IDs later. + logger.warning(str(e)) + self._sso_handler.render_error( + request, "unsolicited_response", "Unexpected SAML2 login." + ) + return + except Exception as e: + self._sso_handler.render_error( + request, + "invalid_response", + "Unable to parse SAML2 response: %s." % (e,), + ) + return + + if saml2_auth.not_signed: + self._sso_handler.render_error( + request, "unsigned_respond", "SAML2 response was not signed." + ) + return + + logger.debug("SAML2 response: %s", saml2_auth.origxml) + + await self._handle_authn_response(request, saml2_auth, relay_state) + + async def _handle_authn_response( + self, + request: SynapseRequest, + saml2_auth: saml2.response.AuthnResponse, + relay_state: str, + ) -> None: + """Handle an AuthnResponse, having parsed it from the request params + + Assumes that the signature on the response object has been checked. Maps + the user onto an MXID, registering them if necessary, and returns a response + to the browser. + + Args: + request: the incoming request from the browser. We'll respond to it with an + HTML page or a redirect + saml2_auth: the parsed AuthnResponse object + relay_state: the RelayState query param, which encodes the URI to redirect + back to + """ + + for assertion in saml2_auth.assertions: + # kibana limits the length of a log field, whereas this is all rather + # useful, so split it up. + count = 0 + for part in chunk_seq(str(assertion), 10000): + logger.info( + "SAML2 assertion: %s%s", "(%i)..."
% (count,) if count else "", part + ) + count += 1 + + logger.info("SAML2 mapped attributes: %s", saml2_auth.ava) + + current_session = self._outstanding_requests_dict.pop( + saml2_auth.in_response_to, None + ) + + # first check if we're doing a UIA + if current_session and current_session.ui_auth_session_id: + try: + remote_user_id = self._remote_id_from_saml_response(saml2_auth, None) + except MappingException as e: + logger.exception("Failed to extract remote user id from SAML response") + self._sso_handler.render_error(request, "mapping_error", str(e)) + return + + return await self._sso_handler.complete_sso_ui_auth_request( + self.idp_id, + remote_user_id, + current_session.ui_auth_session_id, + request, + ) + + # otherwise, we're handling a login request. + + # Ensure that the attributes of the logged in user meet the required + # attributes. + if not self._sso_handler.check_required_attributes( + request, saml2_auth.ava, self._saml2_attribute_requirements + ): + return + + # Call the mapper to register/login the user + try: + await self._complete_saml_login(saml2_auth, request, relay_state) + except MappingException as e: + logger.exception("Could not map user") + self._sso_handler.render_error(request, "mapping_error", str(e)) + + async def _complete_saml_login( + self, + saml2_auth: saml2.response.AuthnResponse, + request: SynapseRequest, + client_redirect_url: str, + ) -> None: + """ + Given a SAML response, complete the login flow + + Retrieves the remote user ID, registers the user if necessary, and serves + a redirect back to the client with a login-token. + + Args: + saml2_auth: The parsed SAML2 response. + request: The request to respond to + client_redirect_url: The redirect URL passed in by the client. + + Raises: + MappingException if there was a problem mapping the response to a user. + RedirectException: some mapping providers may raise this if they need + to redirect to an interstitial page. + """ + remote_user_id = self._remote_id_from_saml_response( + saml2_auth, client_redirect_url + ) + + async def saml_response_to_remapped_user_attributes( + failures: int, + ) -> UserAttributes: + """ + Call the mapping provider to map a SAML response to user attributes and coerce the result into the standard form. + + This is a backwards-compatibility shim for the abstraction expected by the SSO handler. + """ + # Call the mapping provider. + result = self._user_mapping_provider.saml_response_to_user_attributes( + saml2_auth, failures, client_redirect_url + ) + # Remap some of the results.
+ return UserAttributes( + localpart=result.get("mxid_localpart"), + display_name=result.get("displayname"), + emails=result.get("emails", []), + ) + + async def grandfather_existing_users() -> Optional[str]: + # backwards-compatibility hack: see if there is an existing user with a + # suitable mapping from the uid + if ( + self._grandfathered_mxid_source_attribute + and self._grandfathered_mxid_source_attribute in saml2_auth.ava + ): + attrval = saml2_auth.ava[self._grandfathered_mxid_source_attribute][0] + user_id = UserID( + map_username_to_mxid_localpart(attrval), self.server_name + ).to_string() + + logger.debug( + "Looking for existing account based on mapped %s %s", + self._grandfathered_mxid_source_attribute, + user_id, + ) + + users = await self.store.get_users_by_id_case_insensitive(user_id) + if users: + registered_user_id = list(users.keys())[0] + logger.info("Grandfathering mapping to %s", registered_user_id) + return registered_user_id + + return None + + await self._sso_handler.complete_sso_login_request( + self.idp_id, + remote_user_id, + request, + client_redirect_url, + saml_response_to_remapped_user_attributes, + grandfather_existing_users, + ) + + def _remote_id_from_saml_response( + self, + saml2_auth: saml2.response.AuthnResponse, + client_redirect_url: Optional[str], + ) -> str: + """Extract the unique remote id from a SAML2 AuthnResponse + + Args: + saml2_auth: The parsed SAML2 response. + client_redirect_url: The redirect URL passed in by the client. + Returns: + remote user id + + Raises: + MappingException if there was an error extracting the user id + """ + # It's not obvious why we need to pass in the redirect URI to the mapping + # provider, but we do :/ + remote_user_id = self._user_mapping_provider.get_remote_user_id( + saml2_auth, client_redirect_url + ) + + if not remote_user_id: + raise MappingException( + "Failed to extract remote user id from SAML response" + ) + + return remote_user_id + + def expire_sessions(self): + expire_before = self.clock.time_msec() - self._saml2_session_lifetime + to_expire = set() + for reqid, data in self._outstanding_requests_dict.items(): + if data.creation_time < expire_before: + to_expire.add(reqid) + for reqid in to_expire: + logger.debug("Expiring session id %s", reqid) + del self._outstanding_requests_dict[reqid] + + +DOT_REPLACE_PATTERN = re.compile( + ("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),)) +) + + +def dot_replace_for_mxid(username: str) -> str: + """Replace any characters which are not allowed in Matrix IDs with a dot.""" + username = username.lower() + username = DOT_REPLACE_PATTERN.sub(".", username) + + # regular mxids aren't allowed to start with an underscore either + username = re.sub("^_", "", username) + return username + + +MXID_MAPPER_MAP = { + "hexencode": map_username_to_mxid_localpart, + "dotreplace": dot_replace_for_mxid, +} # type: Dict[str, Callable[[str], str]] + + +@attr.s +class SamlConfig: + mxid_source_attribute = attr.ib() + mxid_mapper = attr.ib() + + +class DefaultSamlMappingProvider: + __version__ = "0.0.1" + + def __init__(self, parsed_config: SamlConfig, module_api: ModuleApi): + """The default SAML user mapping provider + + Args: + parsed_config: Module configuration + module_api: module api proxy + """ + self._mxid_source_attribute = parsed_config.mxid_source_attribute + self._mxid_mapper = parsed_config.mxid_mapper + + self._grandfathered_mxid_source_attribute = ( + module_api._hs.config.saml2_grandfathered_mxid_source_attribute + ) + + def 
get_remote_user_id( + self, saml_response: saml2.response.AuthnResponse, client_redirect_url: str + ) -> str: + """Extracts the remote user id from the SAML response""" + try: + return saml_response.ava["uid"][0] + except KeyError: + logger.warning("SAML2 response lacks a 'uid' attestation") + raise MappingException("'uid' not in SAML2 response") + + def saml_response_to_user_attributes( + self, + saml_response: saml2.response.AuthnResponse, + failures: int, + client_redirect_url: str, + ) -> dict: + """Maps some text from a SAML response to attributes of a new user + + Args: + saml_response: A SAML auth response object + + failures: How many times a call to this function with this + saml_response has resulted in a failure + + client_redirect_url: where the client wants to redirect to + + Returns: + dict: A dict containing new user attributes. Possible keys: + * mxid_localpart (str): Required. The localpart of the user's mxid + * displayname (str): The displayname of the user + * emails (list[str]): Any emails for the user + """ + try: + mxid_source = saml_response.ava[self._mxid_source_attribute][0] + except KeyError: + logger.warning( + "SAML2 response lacks a '%s' attestation", + self._mxid_source_attribute, + ) + raise SynapseError( + 400, "%s not in SAML2 response" % (self._mxid_source_attribute,) + ) + + # Use the configured mapper for this mxid_source + localpart = self._mxid_mapper(mxid_source) + + # Append suffix integer if last call to this function failed to produce + # a usable mxid. + localpart += str(failures) if failures else "" + + # Retrieve the display name from the saml response + # If displayname is None, the mxid_localpart will be used instead + displayname = saml_response.ava.get("displayName", [None])[0] + + # Retrieve any emails present in the saml response + emails = saml_response.ava.get("email", []) + + return { + "mxid_localpart": localpart, + "displayname": displayname, + "emails": emails, + } + + @staticmethod + def parse_config(config: dict) -> SamlConfig: + """Parse the dict provided by the homeserver's config + Args: + config: A dictionary containing configuration options for this provider + Returns: + SamlConfig: A custom config object for this module + """ + # Parse config options and use defaults where necessary + mxid_source_attribute = config.get("mxid_source_attribute", "uid") + mapping_type = config.get("mxid_mapping", "hexencode") + + # Retrieve the associated mapping function + try: + mxid_mapper = MXID_MAPPER_MAP[mapping_type] + except KeyError: + raise ConfigError( + "saml2_config.user_mapping_provider.config: '%s' is not a valid " + "mxid_mapping value" % (mapping_type,) + ) + + return SamlConfig(mxid_source_attribute, mxid_mapper) + + @staticmethod + def get_saml_attributes(config: SamlConfig) -> Tuple[Set[str], Set[str]]: + """Returns the required attributes of a SAML response + + Args: + config: A SamlConfig object containing configuration params for this provider + + Returns: + The first set equates to the saml auth response + attributes that are required for the module to function, whereas the + second set consists of those attributes which can be used if + available, but are not necessary + """ + return {"uid", config.mxid_source_attribute}, {"displayName", "email"} diff --git a/synapse/handlers/saml_handler.py b/synapse/handlers/saml_handler.py deleted file mode 100644 index 80ba65b9e0..0000000000 --- a/synapse/handlers/saml_handler.py +++ /dev/null @@ -1,517 +0,0 @@ -# Copyright 2019 The Matrix.org Foundation C.I.C.
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import logging -import re -from typing import TYPE_CHECKING, Callable, Dict, Optional, Set, Tuple - -import attr -import saml2 -import saml2.response -from saml2.client import Saml2Client - -from synapse.api.errors import SynapseError -from synapse.config import ConfigError -from synapse.handlers._base import BaseHandler -from synapse.handlers.sso import MappingException, UserAttributes -from synapse.http.servlet import parse_string -from synapse.http.site import SynapseRequest -from synapse.module_api import ModuleApi -from synapse.types import ( - UserID, - map_username_to_mxid_localpart, - mxid_localpart_allowed_characters, -) -from synapse.util.iterutils import chunk_seq - -if TYPE_CHECKING: - from synapse.server import HomeServer - -logger = logging.getLogger(__name__) - - -@attr.s(slots=True) -class Saml2SessionData: - """Data we track about SAML2 sessions""" - - # time the session was created, in milliseconds - creation_time = attr.ib() - # The user interactive authentication session ID associated with this SAML - # session (or None if this SAML session is for an initial login). - ui_auth_session_id = attr.ib(type=Optional[str], default=None) - - -class SamlHandler(BaseHandler): - def __init__(self, hs: "HomeServer"): - super().__init__(hs) - self._saml_client = Saml2Client(hs.config.saml2_sp_config) - self._saml_idp_entityid = hs.config.saml2_idp_entityid - - self._saml2_session_lifetime = hs.config.saml2_session_lifetime - self._grandfathered_mxid_source_attribute = ( - hs.config.saml2_grandfathered_mxid_source_attribute - ) - self._saml2_attribute_requirements = hs.config.saml2.attribute_requirements - self._error_template = hs.config.sso_error_template - - # plugin to do custom mapping from saml response to mxid - self._user_mapping_provider = hs.config.saml2_user_mapping_provider_class( - hs.config.saml2_user_mapping_provider_config, - ModuleApi(hs, hs.get_auth_handler()), - ) - - # identifier for the external_ids table - self.idp_id = "saml" - - # user-facing name of this auth provider - self.idp_name = "SAML" - - # we do not currently support icons/brands for SAML auth, but this is required by - # the SsoIdentityProvider protocol type. - self.idp_icon = None - self.idp_brand = None - self.unstable_idp_brand = None - - # a map from saml session id to Saml2SessionData object - self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData] - - self._sso_handler = hs.get_sso_handler() - self._sso_handler.register_identity_provider(self) - - async def handle_redirect_request( - self, - request: SynapseRequest, - client_redirect_url: Optional[bytes], - ui_auth_session_id: Optional[str] = None, - ) -> str: - """Handle an incoming request to /login/sso/redirect - - Args: - request: the incoming HTTP request - client_redirect_url: the URL that we should redirect the - client to after login (or None for UI Auth). - ui_auth_session_id: The session ID of the ongoing UI Auth (or - None if this is a login). 
- - Returns: - URL to redirect to - """ - if not client_redirect_url: - # Some SAML identity providers (e.g. Google) require a - # RelayState parameter on requests, so pass in a dummy redirect URL - # (which will never get used). - client_redirect_url = b"unused" - - reqid, info = self._saml_client.prepare_for_authenticate( - entityid=self._saml_idp_entityid, relay_state=client_redirect_url - ) - - # Since SAML sessions timeout it is useful to log when they were created. - logger.info("Initiating a new SAML session: %s" % (reqid,)) - - now = self.clock.time_msec() - self._outstanding_requests_dict[reqid] = Saml2SessionData( - creation_time=now, - ui_auth_session_id=ui_auth_session_id, - ) - - for key, value in info["headers"]: - if key == "Location": - return value - - # this shouldn't happen! - raise Exception("prepare_for_authenticate didn't return a Location header") - - async def handle_saml_response(self, request: SynapseRequest) -> None: - """Handle an incoming request to /_synapse/client/saml2/authn_response - - Args: - request: the incoming request from the browser. We'll - respond to it with a redirect. - - Returns: - Completes once we have handled the request. - """ - resp_bytes = parse_string(request, "SAMLResponse", required=True) - relay_state = parse_string(request, "RelayState", required=True) - - # expire outstanding sessions before parse_authn_request_response checks - # the dict. - self.expire_sessions() - - try: - saml2_auth = self._saml_client.parse_authn_request_response( - resp_bytes, - saml2.BINDING_HTTP_POST, - outstanding=self._outstanding_requests_dict, - ) - except saml2.response.UnsolicitedResponse as e: - # the pysaml2 library helpfully logs an ERROR here, but neglects to log - # the session ID. I don't really want to put the full text of the exception - # in the (user-visible) exception message, so let's log the exception here - # so we can track down the session IDs later. - logger.warning(str(e)) - self._sso_handler.render_error( - request, "unsolicited_response", "Unexpected SAML2 login." - ) - return - except Exception as e: - self._sso_handler.render_error( - request, - "invalid_response", - "Unable to parse SAML2 response: %s." % (e,), - ) - return - - if saml2_auth.not_signed: - self._sso_handler.render_error( - request, "unsigned_respond", "SAML2 response was not signed." - ) - return - - logger.debug("SAML2 response: %s", saml2_auth.origxml) - - await self._handle_authn_response(request, saml2_auth, relay_state) - - async def _handle_authn_response( - self, - request: SynapseRequest, - saml2_auth: saml2.response.AuthnResponse, - relay_state: str, - ) -> None: - """Handle an AuthnResponse, having parsed it from the request params - - Assumes that the signature on the response object has been checked. Maps - the user onto an MXID, registering them if necessary, and returns a response - to the browser. - - Args: - request: the incoming request from the browser. We'll respond to it with an - HTML page or a redirect - saml2_auth: the parsed AuthnResponse object - relay_state: the RelayState query param, which encodes the URI to rediret - back to - """ - - for assertion in saml2_auth.assertions: - # kibana limits the length of a log field, whereas this is all rather - # useful, so split it up. - count = 0 - for part in chunk_seq(str(assertion), 10000): - logger.info( - "SAML2 assertion: %s%s", "(%i)..." 
% (count,) if count else "", part - ) - count += 1 - - logger.info("SAML2 mapped attributes: %s", saml2_auth.ava) - - current_session = self._outstanding_requests_dict.pop( - saml2_auth.in_response_to, None - ) - - # first check if we're doing a UIA - if current_session and current_session.ui_auth_session_id: - try: - remote_user_id = self._remote_id_from_saml_response(saml2_auth, None) - except MappingException as e: - logger.exception("Failed to extract remote user id from SAML response") - self._sso_handler.render_error(request, "mapping_error", str(e)) - return - - return await self._sso_handler.complete_sso_ui_auth_request( - self.idp_id, - remote_user_id, - current_session.ui_auth_session_id, - request, - ) - - # otherwise, we're handling a login request. - - # Ensure that the attributes of the logged in user meet the required - # attributes. - if not self._sso_handler.check_required_attributes( - request, saml2_auth.ava, self._saml2_attribute_requirements - ): - return - - # Call the mapper to register/login the user - try: - await self._complete_saml_login(saml2_auth, request, relay_state) - except MappingException as e: - logger.exception("Could not map user") - self._sso_handler.render_error(request, "mapping_error", str(e)) - - async def _complete_saml_login( - self, - saml2_auth: saml2.response.AuthnResponse, - request: SynapseRequest, - client_redirect_url: str, - ) -> None: - """ - Given a SAML response, complete the login flow - - Retrieves the remote user ID, registers the user if necessary, and serves - a redirect back to the client with a login-token. - - Args: - saml2_auth: The parsed SAML2 response. - request: The request to respond to - client_redirect_url: The redirect URL passed in by the client. - - Raises: - MappingException if there was a problem mapping the response to a user. - RedirectException: some mapping providers may raise this if they need - to redirect to an interstitial page. - """ - remote_user_id = self._remote_id_from_saml_response( - saml2_auth, client_redirect_url - ) - - async def saml_response_to_remapped_user_attributes( - failures: int, - ) -> UserAttributes: - """ - Call the mapping provider to map a SAML response to user attributes and coerce the result into the standard form. - - This is backwards compatibility for abstraction for the SSO handler. - """ - # Call the mapping provider. - result = self._user_mapping_provider.saml_response_to_user_attributes( - saml2_auth, failures, client_redirect_url - ) - # Remap some of the results. 
- return UserAttributes( - localpart=result.get("mxid_localpart"), - display_name=result.get("displayname"), - emails=result.get("emails", []), - ) - - async def grandfather_existing_users() -> Optional[str]: - # backwards-compatibility hack: see if there is an existing user with a - # suitable mapping from the uid - if ( - self._grandfathered_mxid_source_attribute - and self._grandfathered_mxid_source_attribute in saml2_auth.ava - ): - attrval = saml2_auth.ava[self._grandfathered_mxid_source_attribute][0] - user_id = UserID( - map_username_to_mxid_localpart(attrval), self.server_name - ).to_string() - - logger.debug( - "Looking for existing account based on mapped %s %s", - self._grandfathered_mxid_source_attribute, - user_id, - ) - - users = await self.store.get_users_by_id_case_insensitive(user_id) - if users: - registered_user_id = list(users.keys())[0] - logger.info("Grandfathering mapping to %s", registered_user_id) - return registered_user_id - - return None - - await self._sso_handler.complete_sso_login_request( - self.idp_id, - remote_user_id, - request, - client_redirect_url, - saml_response_to_remapped_user_attributes, - grandfather_existing_users, - ) - - def _remote_id_from_saml_response( - self, - saml2_auth: saml2.response.AuthnResponse, - client_redirect_url: Optional[str], - ) -> str: - """Extract the unique remote id from a SAML2 AuthnResponse - - Args: - saml2_auth: The parsed SAML2 response. - client_redirect_url: The redirect URL passed in by the client. - Returns: - remote user id - - Raises: - MappingException if there was an error extracting the user id - """ - # It's not obvious why we need to pass in the redirect URI to the mapping - # provider, but we do :/ - remote_user_id = self._user_mapping_provider.get_remote_user_id( - saml2_auth, client_redirect_url - ) - - if not remote_user_id: - raise MappingException( - "Failed to extract remote user id from SAML response" - ) - - return remote_user_id - - def expire_sessions(self): - expire_before = self.clock.time_msec() - self._saml2_session_lifetime - to_expire = set() - for reqid, data in self._outstanding_requests_dict.items(): - if data.creation_time < expire_before: - to_expire.add(reqid) - for reqid in to_expire: - logger.debug("Expiring session id %s", reqid) - del self._outstanding_requests_dict[reqid] - - -DOT_REPLACE_PATTERN = re.compile( - ("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),)) -) - - -def dot_replace_for_mxid(username: str) -> str: - """Replace any characters which are not allowed in Matrix IDs with a dot.""" - username = username.lower() - username = DOT_REPLACE_PATTERN.sub(".", username) - - # regular mxids aren't allowed to start with an underscore either - username = re.sub("^_", "", username) - return username - - -MXID_MAPPER_MAP = { - "hexencode": map_username_to_mxid_localpart, - "dotreplace": dot_replace_for_mxid, -} # type: Dict[str, Callable[[str], str]] - - -@attr.s -class SamlConfig: - mxid_source_attribute = attr.ib() - mxid_mapper = attr.ib() - - -class DefaultSamlMappingProvider: - __version__ = "0.0.1" - - def __init__(self, parsed_config: SamlConfig, module_api: ModuleApi): - """The default SAML user mapping provider - - Args: - parsed_config: Module configuration - module_api: module api proxy - """ - self._mxid_source_attribute = parsed_config.mxid_source_attribute - self._mxid_mapper = parsed_config.mxid_mapper - - self._grandfathered_mxid_source_attribute = ( - module_api._hs.config.saml2_grandfathered_mxid_source_attribute - ) - - def 
get_remote_user_id( - self, saml_response: saml2.response.AuthnResponse, client_redirect_url: str - ) -> str: - """Extracts the remote user id from the SAML response""" - try: - return saml_response.ava["uid"][0] - except KeyError: - logger.warning("SAML2 response lacks a 'uid' attestation") - raise MappingException("'uid' not in SAML2 response") - - def saml_response_to_user_attributes( - self, - saml_response: saml2.response.AuthnResponse, - failures: int, - client_redirect_url: str, - ) -> dict: - """Maps some text from a SAML response to attributes of a new user - - Args: - saml_response: A SAML auth response object - - failures: How many times a call to this function with this - saml_response has resulted in a failure - - client_redirect_url: where the client wants to redirect to - - Returns: - dict: A dict containing new user attributes. Possible keys: - * mxid_localpart (str): Required. The localpart of the user's mxid - * displayname (str): The displayname of the user - * emails (list[str]): Any emails for the user - """ - try: - mxid_source = saml_response.ava[self._mxid_source_attribute][0] - except KeyError: - logger.warning( - "SAML2 response lacks a '%s' attestation", - self._mxid_source_attribute, - ) - raise SynapseError( - 400, "%s not in SAML2 response" % (self._mxid_source_attribute,) - ) - - # Use the configured mapper for this mxid_source - localpart = self._mxid_mapper(mxid_source) - - # Append suffix integer if last call to this function failed to produce - # a usable mxid. - localpart += str(failures) if failures else "" - - # Retrieve the display name from the saml response - # If displayname is None, the mxid_localpart will be used instead - displayname = saml_response.ava.get("displayName", [None])[0] - - # Retrieve any emails present in the saml response - emails = saml_response.ava.get("email", []) - - return { - "mxid_localpart": localpart, - "displayname": displayname, - "emails": emails, - } - - @staticmethod - def parse_config(config: dict) -> SamlConfig: - """Parse the dict provided by the homeserver's config - Args: - config: A dictionary containing configuration options for this provider - Returns: - SamlConfig: A custom config object for this module - """ - # Parse config options and use defaults where necessary - mxid_source_attribute = config.get("mxid_source_attribute", "uid") - mapping_type = config.get("mxid_mapping", "hexencode") - - # Retrieve the associating mapping function - try: - mxid_mapper = MXID_MAPPER_MAP[mapping_type] - except KeyError: - raise ConfigError( - "saml2_config.user_mapping_provider.config: '%s' is not a valid " - "mxid_mapping value" % (mapping_type,) - ) - - return SamlConfig(mxid_source_attribute, mxid_mapper) - - @staticmethod - def get_saml_attributes(config: SamlConfig) -> Tuple[Set[str], Set[str]]: - """Returns the required attributes of a SAML - - Args: - config: A SamlConfig object containing configuration params for this provider - - Returns: - The first set equates to the saml auth response - attributes that are required for the module to function, whereas the - second set consists of those attributes which can be used if - available, but are not necessary - """ - return {"uid", config.mxid_source_attribute}, {"displayName", "email"} diff --git a/synapse/rest/client/v2_alpha/register.py b/synapse/rest/client/v2_alpha/register.py index b26aad7b34..c5a6800b8a 100644 --- a/synapse/rest/client/v2_alpha/register.py +++ b/synapse/rest/client/v2_alpha/register.py @@ -30,7 +30,7 @@ from synapse.api.errors import ( ) from 
synapse.config import ConfigError from synapse.config.captcha import CaptchaConfig -from synapse.config.consent_config import ConsentConfig +from synapse.config.consent import ConsentConfig from synapse.config.emailconfig import ThreepidBehaviour from synapse.config.ratelimiting import FederationRateLimitConfig from synapse.config.registration import RegistrationConfig diff --git a/synapse/server.py b/synapse/server.py index 42d2fad8e8..59ae91b503 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -70,7 +70,7 @@ from synapse.handlers.acme import AcmeHandler from synapse.handlers.admin import AdminHandler from synapse.handlers.appservice import ApplicationServicesHandler from synapse.handlers.auth import AuthHandler, MacaroonGenerator -from synapse.handlers.cas_handler import CasHandler +from synapse.handlers.cas import CasHandler from synapse.handlers.deactivate_account import DeactivateAccountHandler from synapse.handlers.device import DeviceHandler, DeviceWorkerHandler from synapse.handlers.devicemessage import DeviceMessageHandler @@ -145,8 +145,8 @@ logger = logging.getLogger(__name__) if TYPE_CHECKING: from txredisapi import RedisProtocol - from synapse.handlers.oidc_handler import OidcHandler - from synapse.handlers.saml_handler import SamlHandler + from synapse.handlers.oidc import OidcHandler + from synapse.handlers.saml import SamlHandler T = TypeVar("T", bound=Callable[..., Any]) @@ -696,13 +696,13 @@ class HomeServer(metaclass=abc.ABCMeta): @cache_in_self def get_saml_handler(self) -> "SamlHandler": - from synapse.handlers.saml_handler import SamlHandler + from synapse.handlers.saml import SamlHandler return SamlHandler(self) @cache_in_self def get_oidc_handler(self) -> "OidcHandler": - from synapse.handlers.oidc_handler import OidcHandler + from synapse.handlers.oidc import OidcHandler return OidcHandler(self) diff --git a/tests/handlers/test_cas.py b/tests/handlers/test_cas.py index 0444b26798..b625995d12 100644 --- a/tests/handlers/test_cas.py +++ b/tests/handlers/test_cas.py @@ -13,7 +13,7 @@ # limitations under the License. 
from unittest.mock import Mock -from synapse.handlers.cas_handler import CasResponse +from synapse.handlers.cas import CasResponse from tests.test_utils import simple_async_mock from tests.unittest import HomeserverTestCase, override_config diff --git a/tests/handlers/test_oidc.py b/tests/handlers/test_oidc.py index 34d2fc1dfb..a25c89bd5b 100644 --- a/tests/handlers/test_oidc.py +++ b/tests/handlers/test_oidc.py @@ -499,7 +499,7 @@ class OidcHandlerTestCase(HomeserverTestCase): self.assertRenderedError("fetch_error") # Handle code exchange failure - from synapse.handlers.oidc_handler import OidcError + from synapse.handlers.oidc import OidcError self.provider._exchange_code = simple_async_mock( raises=OidcError("invalid_request") @@ -583,7 +583,7 @@ class OidcHandlerTestCase(HomeserverTestCase): body=b'{"error": "foo", "error_description": "bar"}', ) ) - from synapse.handlers.oidc_handler import OidcError + from synapse.handlers.oidc import OidcError exc = self.get_failure(self.provider._exchange_code(code), OidcError) self.assertEqual(exc.value.error, "foo") @@ -1126,7 +1126,7 @@ class OidcHandlerTestCase(HomeserverTestCase): client_redirect_url: str, ui_auth_session_id: str = "", ) -> str: - from synapse.handlers.oidc_handler import OidcSessionData + from synapse.handlers.oidc import OidcSessionData return self.handler._token_generator.generate_oidc_session_token( state=state, @@ -1152,7 +1152,7 @@ async def _make_callback_with_userinfo( userinfo: the OIDC userinfo dict client_redirect_url: the URL to redirect to on success. """ - from synapse.handlers.oidc_handler import OidcSessionData + from synapse.handlers.oidc import OidcSessionData handler = hs.get_oidc_handler() provider = handler._providers["oidc"] -- cgit 1.5.1 From 294c67503300b6bfa7785a5cfa55e25c1e452574 Mon Sep 17 00:00:00 2001 From: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> Date: Thu, 22 Apr 2021 16:43:50 +0100 Subject: Remove `synapse.types.Collection` (#9856) This is no longer required, since we have dropped support for Python 3.5. --- changelog.d/9856.misc | 1 + synapse/config/oidc.py | 4 ++-- synapse/events/spamcheck.py | 3 +-- synapse/federation/sender/__init__.py | 14 ++++++++++++-- synapse/handlers/appservice.py | 4 ++-- synapse/handlers/device.py | 3 +-- synapse/handlers/presence.py | 3 ++- synapse/handlers/sso.py | 3 ++- synapse/handlers/sync.py | 13 +++++++++++-- synapse/notifier.py | 9 ++------- synapse/replication/tcp/protocol.py | 3 +-- synapse/state/__init__.py | 3 ++- synapse/state/v2.py | 3 ++- synapse/storage/_base.py | 4 ++-- synapse/storage/database.py | 2 +- synapse/storage/databases/main/devices.py | 4 ++-- synapse/storage/databases/main/event_federation.py | 3 +-- synapse/storage/databases/main/events_worker.py | 13 +++++++++++-- synapse/storage/databases/main/roommember.py | 14 ++++++++++++-- synapse/storage/databases/main/search.py | 3 +-- synapse/storage/databases/main/stream.py | 4 ++-- synapse/storage/persist_events.py | 3 +-- synapse/storage/prepare_database.py | 3 +-- synapse/types.py | 14 -------------- synapse/util/caches/stream_change_cache.py | 3 +-- synapse/util/iterutils.py | 3 +-- 26 files changed, 77 insertions(+), 62 deletions(-) create mode 100644 changelog.d/9856.misc diff --git a/changelog.d/9856.misc b/changelog.d/9856.misc new file mode 100644 index 0000000000..d67e8c386a --- /dev/null +++ b/changelog.d/9856.misc @@ -0,0 +1 @@ +Remove redundant `synapse.types.Collection` type definition. 
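The rest of this commit is a mechanical import swap: with Python 3.5 support gone, each module below imports `Collection` from the standard `typing` module rather than from `synapse.types`. A minimal sketch of the before/after pattern; `count_local_users` is a hypothetical function used purely for illustration:

# Before: synapse.types re-exported Collection, shimming it on Python 3.5
#     from synapse.types import Collection
# After: the stdlib alias (available since Python 3.6) is imported directly
from typing import Collection

def count_local_users(user_ids: Collection[str]) -> int:
    # Collection is Sized + Iterable + Container, so any list, set,
    # tuple or frozenset of user IDs is acceptable here.
    return sum(1 for u in user_ids if u.endswith(":localhost"))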
diff --git a/synapse/config/oidc.py b/synapse/config/oidc.py index 72402eb81d..ea0abf5aa2 100644 --- a/synapse/config/oidc.py +++ b/synapse/config/oidc.py @@ -14,14 +14,14 @@ # limitations under the License. from collections import Counter -from typing import Iterable, List, Mapping, Optional, Tuple, Type +from typing import Collection, Iterable, List, Mapping, Optional, Tuple, Type import attr from synapse.config._util import validate_config from synapse.config.sso import SsoAttributeRequirement from synapse.python_dependencies import DependencyException, check_requirements -from synapse.types import Collection, JsonDict +from synapse.types import JsonDict from synapse.util.module_loader import load_module from synapse.util.stringutils import parse_and_validate_mxc_uri diff --git a/synapse/events/spamcheck.py b/synapse/events/spamcheck.py index c727b48c1e..7118d5f52d 100644 --- a/synapse/events/spamcheck.py +++ b/synapse/events/spamcheck.py @@ -15,12 +15,11 @@ import inspect import logging -from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union +from typing import TYPE_CHECKING, Any, Collection, Dict, List, Optional, Tuple, Union from synapse.rest.media.v1._base import FileInfo from synapse.rest.media.v1.media_storage import ReadableFileWrapper from synapse.spam_checker_api import RegistrationBehaviour -from synapse.types import Collection from synapse.util.async_helpers import maybe_awaitable if TYPE_CHECKING: diff --git a/synapse/federation/sender/__init__.py b/synapse/federation/sender/__init__.py index b00a55324c..022bbf7dad 100644 --- a/synapse/federation/sender/__init__.py +++ b/synapse/federation/sender/__init__.py @@ -14,7 +14,17 @@ import abc import logging -from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple +from typing import ( + TYPE_CHECKING, + Collection, + Dict, + Hashable, + Iterable, + List, + Optional, + Set, + Tuple, +) from prometheus_client import Counter @@ -31,7 +41,7 @@ from synapse.metrics import ( events_processed_counter, ) from synapse.metrics.background_process_metrics import run_as_background_process -from synapse.types import Collection, JsonDict, ReadReceipt, RoomStreamToken +from synapse.types import JsonDict, ReadReceipt, RoomStreamToken from synapse.util.metrics import Measure if TYPE_CHECKING: diff --git a/synapse/handlers/appservice.py b/synapse/handlers/appservice.py index d7bc4e23ed..177310f0be 100644 --- a/synapse/handlers/appservice.py +++ b/synapse/handlers/appservice.py @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. import logging -from typing import TYPE_CHECKING, Dict, List, Optional, Union +from typing import TYPE_CHECKING, Collection, Dict, List, Optional, Union from prometheus_client import Counter @@ -33,7 +33,7 @@ from synapse.metrics.background_process_metrics import ( wrap_as_background_process, ) from synapse.storage.databases.main.directory import RoomAliasMapping -from synapse.types import Collection, JsonDict, RoomAlias, RoomStreamToken, UserID +from synapse.types import JsonDict, RoomAlias, RoomStreamToken, UserID from synapse.util.metrics import Measure if TYPE_CHECKING: diff --git a/synapse/handlers/device.py b/synapse/handlers/device.py index c1d7800981..34d39e3b44 100644 --- a/synapse/handlers/device.py +++ b/synapse/handlers/device.py @@ -14,7 +14,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
import logging -from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple +from typing import TYPE_CHECKING, Collection, Dict, Iterable, List, Optional, Set, Tuple from synapse.api import errors from synapse.api.constants import EventTypes @@ -28,7 +28,6 @@ from synapse.api.errors import ( from synapse.logging.opentracing import log_kv, set_tag, trace from synapse.metrics.background_process_metrics import run_as_background_process from synapse.types import ( - Collection, JsonDict, StreamToken, UserID, diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index 598466c9bd..7fd28ffa54 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -28,6 +28,7 @@ from bisect import bisect from contextlib import contextmanager from typing import ( TYPE_CHECKING, + Collection, Dict, FrozenSet, Iterable, @@ -59,7 +60,7 @@ from synapse.replication.tcp.commands import ClearUserSyncsCommand from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream from synapse.state import StateHandler from synapse.storage.databases.main import DataStore -from synapse.types import Collection, JsonDict, UserID, get_domain_from_id +from synapse.types import JsonDict, UserID, get_domain_from_id from synapse.util.async_helpers import Linearizer from synapse.util.caches.descriptors import _CacheContext, cached from synapse.util.metrics import Measure diff --git a/synapse/handlers/sso.py b/synapse/handlers/sso.py index 8d00ffdc73..044ff06d84 100644 --- a/synapse/handlers/sso.py +++ b/synapse/handlers/sso.py @@ -18,6 +18,7 @@ from typing import ( Any, Awaitable, Callable, + Collection, Dict, Iterable, List, @@ -40,7 +41,7 @@ from synapse.handlers.ui_auth import UIAuthSessionDataConstants from synapse.http import get_request_user_agent from synapse.http.server import respond_with_html, respond_with_redirect from synapse.http.site import SynapseRequest -from synapse.types import Collection, JsonDict, UserID, contains_invalid_mxid_characters +from synapse.types import JsonDict, UserID, contains_invalid_mxid_characters from synapse.util.async_helpers import Linearizer from synapse.util.stringutils import random_string diff --git a/synapse/handlers/sync.py b/synapse/handlers/sync.py index dc8ee8cd17..a9a3ee05c3 100644 --- a/synapse/handlers/sync.py +++ b/synapse/handlers/sync.py @@ -14,7 +14,17 @@ # limitations under the License. 
import itertools import logging -from typing import TYPE_CHECKING, Any, Dict, FrozenSet, List, Optional, Set, Tuple +from typing import ( + TYPE_CHECKING, + Any, + Collection, + Dict, + FrozenSet, + List, + Optional, + Set, + Tuple, +) import attr from prometheus_client import Counter @@ -28,7 +38,6 @@ from synapse.push.clientformat import format_push_rules_for_user from synapse.storage.roommember import MemberSummary from synapse.storage.state import StateFilter from synapse.types import ( - Collection, JsonDict, MutableStateMap, Requester, diff --git a/synapse/notifier.py b/synapse/notifier.py index d5ab77058d..b9531007e2 100644 --- a/synapse/notifier.py +++ b/synapse/notifier.py @@ -17,6 +17,7 @@ from collections import namedtuple from typing import ( Awaitable, Callable, + Collection, Dict, Iterable, List, @@ -42,13 +43,7 @@ from synapse.logging.opentracing import log_kv, start_active_span from synapse.logging.utils import log_function from synapse.metrics import LaterGauge from synapse.streams.config import PaginationConfig -from synapse.types import ( - Collection, - PersistedEventPosition, - RoomStreamToken, - StreamToken, - UserID, -) +from synapse.types import PersistedEventPosition, RoomStreamToken, StreamToken, UserID from synapse.util.async_helpers import ObservableDeferred, timeout_deferred from synapse.util.metrics import Measure from synapse.visibility import filter_events_for_client diff --git a/synapse/replication/tcp/protocol.py b/synapse/replication/tcp/protocol.py index 6860576e78..6e3705364f 100644 --- a/synapse/replication/tcp/protocol.py +++ b/synapse/replication/tcp/protocol.py @@ -49,7 +49,7 @@ import fcntl import logging import struct from inspect import isawaitable -from typing import TYPE_CHECKING, List, Optional +from typing import TYPE_CHECKING, Collection, List, Optional from prometheus_client import Counter from zope.interface import Interface, implementer @@ -76,7 +76,6 @@ from synapse.replication.tcp.commands import ( ServerCommand, parse_command_from_line, ) -from synapse.types import Collection from synapse.util import Clock from synapse.util.stringutils import random_string diff --git a/synapse/state/__init__.py b/synapse/state/__init__.py index c7ee731154..b3bd92d37c 100644 --- a/synapse/state/__init__.py +++ b/synapse/state/__init__.py @@ -19,6 +19,7 @@ from typing import ( Any, Awaitable, Callable, + Collection, DefaultDict, Dict, FrozenSet, @@ -46,7 +47,7 @@ from synapse.logging.utils import log_function from synapse.state import v1, v2 from synapse.storage.databases.main.events_worker import EventRedactBehaviour from synapse.storage.roommember import ProfileInfo -from synapse.types import Collection, StateMap +from synapse.types import StateMap from synapse.util.async_helpers import Linearizer from synapse.util.caches.expiringcache import ExpiringCache from synapse.util.metrics import Measure, measure_func diff --git a/synapse/state/v2.py b/synapse/state/v2.py index 32671ddbde..008644cd98 100644 --- a/synapse/state/v2.py +++ b/synapse/state/v2.py @@ -18,6 +18,7 @@ import logging from typing import ( Any, Callable, + Collection, Dict, Generator, Iterable, @@ -37,7 +38,7 @@ from synapse.api.constants import EventTypes from synapse.api.errors import AuthError from synapse.api.room_versions import KNOWN_ROOM_VERSIONS from synapse.events import EventBase -from synapse.types import Collection, MutableStateMap, StateMap +from synapse.types import MutableStateMap, StateMap from synapse.util import Clock logger = logging.getLogger(__name__) diff --git 
a/synapse/storage/_base.py b/synapse/storage/_base.py index 56dd3a4861..d472676acf 100644 --- a/synapse/storage/_base.py +++ b/synapse/storage/_base.py @@ -16,13 +16,13 @@ import logging import random from abc import ABCMeta -from typing import TYPE_CHECKING, Any, Iterable, Optional, Union +from typing import TYPE_CHECKING, Any, Collection, Iterable, Optional, Union from synapse.storage.database import LoggingTransaction # noqa: F401 from synapse.storage.database import make_in_list_sql_clause # noqa: F401 from synapse.storage.database import DatabasePool from synapse.storage.types import Connection -from synapse.types import Collection, StreamToken, get_domain_from_id +from synapse.types import StreamToken, get_domain_from_id from synapse.util import json_decoder if TYPE_CHECKING: diff --git a/synapse/storage/database.py b/synapse/storage/database.py index 9a6d2b21f9..9452368bf0 100644 --- a/synapse/storage/database.py +++ b/synapse/storage/database.py @@ -20,6 +20,7 @@ from time import monotonic as monotonic_time from typing import ( Any, Callable, + Collection, Dict, Iterable, Iterator, @@ -48,7 +49,6 @@ from synapse.metrics.background_process_metrics import run_as_background_process from synapse.storage.background_updates import BackgroundUpdater from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine from synapse.storage.types import Connection, Cursor -from synapse.types import Collection # python 3 does not have a maximum int value MAX_TXN_ID = 2 ** 63 - 1 diff --git a/synapse/storage/databases/main/devices.py b/synapse/storage/databases/main/devices.py index b204875580..9be713399f 100644 --- a/synapse/storage/databases/main/devices.py +++ b/synapse/storage/databases/main/devices.py @@ -15,7 +15,7 @@ # limitations under the License. 
import abc import logging -from typing import Any, Dict, Iterable, List, Optional, Set, Tuple +from typing import Any, Collection, Dict, Iterable, List, Optional, Set, Tuple from synapse.api.errors import Codes, StoreError from synapse.logging.opentracing import ( @@ -31,7 +31,7 @@ from synapse.storage.database import ( LoggingTransaction, make_tuple_comparison_clause, ) -from synapse.types import Collection, JsonDict, get_verify_key_from_cross_signing_key +from synapse.types import JsonDict, get_verify_key_from_cross_signing_key from synapse.util import json_decoder, json_encoder from synapse.util.caches.descriptors import cached, cachedList from synapse.util.caches.lrucache import LruCache diff --git a/synapse/storage/databases/main/event_federation.py b/synapse/storage/databases/main/event_federation.py index 32ce70a396..ff81d5cd17 100644 --- a/synapse/storage/databases/main/event_federation.py +++ b/synapse/storage/databases/main/event_federation.py @@ -14,7 +14,7 @@ import itertools import logging from queue import Empty, PriorityQueue -from typing import Dict, Iterable, List, Set, Tuple +from typing import Collection, Dict, Iterable, List, Set, Tuple from synapse.api.errors import StoreError from synapse.events import EventBase @@ -25,7 +25,6 @@ from synapse.storage.databases.main.events_worker import EventsWorkerStore from synapse.storage.databases.main.signatures import SignatureWorkerStore from synapse.storage.engines import PostgresEngine from synapse.storage.types import Cursor -from synapse.types import Collection from synapse.util.caches.descriptors import cached from synapse.util.caches.lrucache import LruCache from synapse.util.iterutils import batch_iter diff --git a/synapse/storage/databases/main/events_worker.py b/synapse/storage/databases/main/events_worker.py index 64d70785b8..2c823e09cf 100644 --- a/synapse/storage/databases/main/events_worker.py +++ b/synapse/storage/databases/main/events_worker.py @@ -15,7 +15,16 @@ import logging import threading from collections import namedtuple -from typing import Container, Dict, Iterable, List, Optional, Tuple, overload +from typing import ( + Collection, + Container, + Dict, + Iterable, + List, + Optional, + Tuple, + overload, +) from constantly import NamedConstant, Names from typing_extensions import Literal @@ -45,7 +54,7 @@ from synapse.storage.database import DatabasePool from synapse.storage.engines import PostgresEngine from synapse.storage.util.id_generators import MultiWriterIdGenerator, StreamIdGenerator from synapse.storage.util.sequence import build_sequence_generator -from synapse.types import Collection, JsonDict, get_domain_from_id +from synapse.types import JsonDict, get_domain_from_id from synapse.util.caches.descriptors import cached from synapse.util.caches.lrucache import LruCache from synapse.util.iterutils import batch_iter diff --git a/synapse/storage/databases/main/roommember.py b/synapse/storage/databases/main/roommember.py index fd525dce65..bd8513cd43 100644 --- a/synapse/storage/databases/main/roommember.py +++ b/synapse/storage/databases/main/roommember.py @@ -13,7 +13,17 @@ # See the License for the specific language governing permissions and # limitations under the License. 
import logging -from typing import TYPE_CHECKING, Dict, FrozenSet, Iterable, List, Optional, Set, Tuple +from typing import ( + TYPE_CHECKING, + Collection, + Dict, + FrozenSet, + Iterable, + List, + Optional, + Set, + Tuple, +) from synapse.api.constants import EventTypes, Membership from synapse.events import EventBase @@ -33,7 +43,7 @@ from synapse.storage.roommember import ( ProfileInfo, RoomsForUser, ) -from synapse.types import Collection, PersistedEventPosition, get_domain_from_id +from synapse.types import PersistedEventPosition, get_domain_from_id from synapse.util.async_helpers import Linearizer from synapse.util.caches import intern_string from synapse.util.caches.descriptors import _CacheContext, cached, cachedList diff --git a/synapse/storage/databases/main/search.py b/synapse/storage/databases/main/search.py index 0276f30656..6480d5a9f5 100644 --- a/synapse/storage/databases/main/search.py +++ b/synapse/storage/databases/main/search.py @@ -15,7 +15,7 @@ import logging import re from collections import namedtuple -from typing import List, Optional, Set +from typing import Collection, List, Optional, Set from synapse.api.errors import SynapseError from synapse.events import EventBase @@ -23,7 +23,6 @@ from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_cla from synapse.storage.database import DatabasePool from synapse.storage.databases.main.events_worker import EventRedactBehaviour from synapse.storage.engines import PostgresEngine, Sqlite3Engine -from synapse.types import Collection logger = logging.getLogger(__name__) diff --git a/synapse/storage/databases/main/stream.py b/synapse/storage/databases/main/stream.py index db5ce4ea01..7581c7d3ff 100644 --- a/synapse/storage/databases/main/stream.py +++ b/synapse/storage/databases/main/stream.py @@ -37,7 +37,7 @@ what sort order was used: import abc import logging from collections import namedtuple -from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple +from typing import TYPE_CHECKING, Collection, Dict, List, Optional, Set, Tuple from twisted.internet import defer @@ -53,7 +53,7 @@ from synapse.storage.database import ( from synapse.storage.databases.main.events_worker import EventsWorkerStore from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine from synapse.storage.util.id_generators import MultiWriterIdGenerator -from synapse.types import Collection, PersistedEventPosition, RoomStreamToken +from synapse.types import PersistedEventPosition, RoomStreamToken from synapse.util.caches.descriptors import cached from synapse.util.caches.stream_change_cache import StreamChangeCache diff --git a/synapse/storage/persist_events.py b/synapse/storage/persist_events.py index 87e040b014..33dc752d8f 100644 --- a/synapse/storage/persist_events.py +++ b/synapse/storage/persist_events.py @@ -17,7 +17,7 @@ import itertools import logging from collections import deque, namedtuple -from typing import Dict, Iterable, List, Optional, Set, Tuple +from typing import Collection, Dict, Iterable, List, Optional, Set, Tuple from prometheus_client import Counter, Histogram @@ -32,7 +32,6 @@ from synapse.storage.databases import Databases from synapse.storage.databases.main.events import DeltaState from synapse.storage.databases.main.events_worker import EventRedactBehaviour from synapse.types import ( - Collection, PersistedEventPosition, RoomStreamToken, StateMap, diff --git a/synapse/storage/prepare_database.py b/synapse/storage/prepare_database.py index 05a9355974..7a2cbee426 100644 --- 
a/synapse/storage/prepare_database.py +++ b/synapse/storage/prepare_database.py @@ -17,7 +17,7 @@ import logging import os import re from collections import Counter -from typing import Generator, Iterable, List, Optional, TextIO, Tuple +from typing import Collection, Generator, Iterable, List, Optional, TextIO, Tuple import attr from typing_extensions import Counter as CounterType @@ -27,7 +27,6 @@ from synapse.storage.database import LoggingDatabaseConnection from synapse.storage.engines import BaseDatabaseEngine from synapse.storage.engines.postgres import PostgresEngine from synapse.storage.types import Cursor -from synapse.types import Collection logger = logging.getLogger(__name__) diff --git a/synapse/types.py b/synapse/types.py index 21654ae686..e19f28d543 100644 --- a/synapse/types.py +++ b/synapse/types.py @@ -15,13 +15,11 @@ import abc import re import string -import sys from collections import namedtuple from typing import ( TYPE_CHECKING, Any, Dict, - Iterable, Mapping, MutableMapping, Optional, @@ -50,18 +48,6 @@ if TYPE_CHECKING: from synapse.appservice.api import ApplicationService from synapse.storage.databases.main import DataStore -# define a version of typing.Collection that works on python 3.5 -if sys.version_info[:3] >= (3, 6, 0): - from typing import Collection -else: - from typing import Container, Sized - - T_co = TypeVar("T_co", covariant=True) - - class Collection(Iterable[T_co], Container[T_co], Sized): # type: ignore - __slots__ = () - - # Define a state map type from type/state_key to T (usually an event ID or # event) T = TypeVar("T") diff --git a/synapse/util/caches/stream_change_cache.py b/synapse/util/caches/stream_change_cache.py index 0469e7d120..e81e468899 100644 --- a/synapse/util/caches/stream_change_cache.py +++ b/synapse/util/caches/stream_change_cache.py @@ -14,11 +14,10 @@ import logging import math -from typing import Dict, FrozenSet, List, Mapping, Optional, Set, Union +from typing import Collection, Dict, FrozenSet, List, Mapping, Optional, Set, Union from sortedcontainers import SortedDict -from synapse.types import Collection from synapse.util import caches logger = logging.getLogger(__name__) diff --git a/synapse/util/iterutils.py b/synapse/util/iterutils.py index 6f73b1d56d..abfdc29832 100644 --- a/synapse/util/iterutils.py +++ b/synapse/util/iterutils.py @@ -15,6 +15,7 @@ import heapq from itertools import islice from typing import ( + Collection, Dict, Generator, Iterable, @@ -26,8 +27,6 @@ from typing import ( TypeVar, ) -from synapse.types import Collection - T = TypeVar("T") -- cgit 1.5.1 From 69018acbd2d1f331d6a52335b4938c3753b16de6 Mon Sep 17 00:00:00 2001 From: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> Date: Thu, 22 Apr 2021 16:53:24 +0100 Subject: Clear the resync bit after resyncing device lists (#9867) Fixes #9866. --- changelog.d/9867.bugfix | 1 + synapse/handlers/device.py | 7 +++++++ synapse/storage/databases/main/devices.py | 19 +++++++++---------- 3 files changed, 17 insertions(+), 10 deletions(-) create mode 100644 changelog.d/9867.bugfix diff --git a/changelog.d/9867.bugfix b/changelog.d/9867.bugfix new file mode 100644 index 0000000000..f236de247d --- /dev/null +++ b/changelog.d/9867.bugfix @@ -0,0 +1 @@ +Fix a bug which could cause Synapse to get stuck in a loop of resyncing device lists. 
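Before the hunks themselves, a condensed sketch of the control flow this fix establishes. The store method names follow the diff below; everything else, including the `finish_device_resync` wrapper, is simplified for illustration. The key point is that the resync marker is now cleared unconditionally once a resync completes, not only on the code path that wrote new cache entries:

async def finish_device_resync(store, user_id, devices, stream_id):
    cached = await store.get_cached_devices_for_user(user_id)
    if cached == {d["device_id"]: d for d in devices}:
        # The cache already matches: skip the write, but do NOT skip
        # clearing the resync flag, or we would resync this user forever.
        devices = []
    else:
        await store.update_remote_device_list_cache(user_id, devices, stream_id)
    # Mark the cache as valid whether or not any updates were processed;
    # this deletes the row from the device_lists_remote_resync table.
    await store.mark_remote_user_device_cache_as_valid(user_id)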
diff --git a/synapse/handlers/device.py b/synapse/handlers/device.py index 34d39e3b44..95bdc5902a 100644 --- a/synapse/handlers/device.py +++ b/synapse/handlers/device.py @@ -925,6 +925,10 @@ class DeviceListUpdater: else: cached_devices = await self.store.get_cached_devices_for_user(user_id) if cached_devices == {d["device_id"]: d for d in devices}: + logging.info( + "Skipping device list resync for %s, as our cache matches already", + user_id, + ) devices = [] ignore_devices = True @@ -940,6 +944,9 @@ class DeviceListUpdater: await self.store.update_remote_device_list_cache( user_id, devices, stream_id ) + # mark the cache as valid, whether or not we actually processed any device + # list updates. + await self.store.mark_remote_user_device_cache_as_valid(user_id) device_ids = [device["device_id"] for device in devices] # Handle cross-signing keys. diff --git a/synapse/storage/databases/main/devices.py b/synapse/storage/databases/main/devices.py index 9be713399f..c9346de316 100644 --- a/synapse/storage/databases/main/devices.py +++ b/synapse/storage/databases/main/devices.py @@ -717,7 +717,15 @@ class DeviceWorkerStore(SQLBaseStore): keyvalues={"user_id": user_id}, values={}, insertion_values={"added_ts": self._clock.time_msec()}, - desc="make_remote_user_device_cache_as_stale", + desc="mark_remote_user_device_cache_as_stale", + ) + + async def mark_remote_user_device_cache_as_valid(self, user_id: str) -> None: + # Remove the database entry that says we need to resync devices, after a resync + await self.db_pool.simple_delete( + table="device_lists_remote_resync", + keyvalues={"user_id": user_id}, + desc="mark_remote_user_device_cache_as_valid", ) async def mark_remote_user_device_list_as_unsubscribed(self, user_id: str) -> None: @@ -1289,15 +1297,6 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore): lock=False, ) - # If we're replacing the remote user's device list cache presumably - # we've done a full resync, so we remove the entry that says we need - # to resync - self.db_pool.simple_delete_txn( - txn, - table="device_lists_remote_resync", - keyvalues={"user_id": user_id}, - ) - async def add_device_change_to_streams( self, user_id: str, device_ids: Collection[str], hosts: List[str] ): -- cgit 1.5.1 From 177dae270420ee4b4c8fa5e2c74c5081d98da320 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Thu, 22 Apr 2021 17:49:11 +0100 Subject: Limit length of accepted email addresses (#9855) --- changelog.d/9855.misc | 1 + synapse/push/emailpusher.py | 9 ++++- synapse/rest/client/v2_alpha/account.py | 8 ++--- synapse/rest/client/v2_alpha/register.py | 8 +++-- synapse/util/threepids.py | 30 +++++++++++++++++ tests/rest/client/v2_alpha/test_register.py | 51 +++++++++++++++++++++++++++++ 6 files changed, 100 insertions(+), 7 deletions(-) create mode 100644 changelog.d/9855.misc diff --git a/changelog.d/9855.misc b/changelog.d/9855.misc new file mode 100644 index 0000000000..6a3d700fde --- /dev/null +++ b/changelog.d/9855.misc @@ -0,0 +1 @@ +Limit length of accepted email addresses. 
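The REST hunks below repeat one pattern: callers switch from `canonicalise_email` to the new `validate_email` helper (added to `synapse/util/threepids.py` at the end of this commit), surfacing any `ValueError` as a 400 error. A minimal sketch of that caller-side pattern; `email_from_request_body` is a hypothetical wrapper, not part of the patch:

from synapse.api.errors import SynapseError
from synapse.util.threepids import validate_email

def email_from_request_body(body: dict) -> str:
    # validate_email canonicalises the address and additionally rejects
    # addresses that fail basic validation, such as over-long ones.
    try:
        return validate_email(body["email"])
    except ValueError as e:
        raise SynapseError(400, str(e))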
diff --git a/synapse/push/emailpusher.py b/synapse/push/emailpusher.py index cd89b54305..99a18874d1 100644 --- a/synapse/push/emailpusher.py +++ b/synapse/push/emailpusher.py @@ -19,8 +19,9 @@ from twisted.internet.error import AlreadyCalled, AlreadyCancelled from twisted.internet.interfaces import IDelayedCall from synapse.metrics.background_process_metrics import run_as_background_process -from synapse.push import Pusher, PusherConfig, ThrottleParams +from synapse.push import Pusher, PusherConfig, PusherConfigException, ThrottleParams from synapse.push.mailer import Mailer +from synapse.util.threepids import validate_email if TYPE_CHECKING: from synapse.server import HomeServer @@ -71,6 +72,12 @@ class EmailPusher(Pusher): self._is_processing = False + # Make sure that the email is valid. + try: + validate_email(self.email) + except ValueError: + raise PusherConfigException("Invalid email") + def on_started(self, should_check_for_notifs: bool) -> None: """Called when this pusher has been started. diff --git a/synapse/rest/client/v2_alpha/account.py b/synapse/rest/client/v2_alpha/account.py index 3aad15132d..085561d3e9 100644 --- a/synapse/rest/client/v2_alpha/account.py +++ b/synapse/rest/client/v2_alpha/account.py @@ -39,7 +39,7 @@ from synapse.metrics import threepid_send_requests from synapse.push.mailer import Mailer from synapse.util.msisdn import phone_number_to_msisdn from synapse.util.stringutils import assert_valid_client_secret, random_string -from synapse.util.threepids import canonicalise_email, check_3pid_allowed +from synapse.util.threepids import check_3pid_allowed, validate_email from ._base import client_patterns, interactive_auth_handler @@ -92,7 +92,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet): # Stored in the database "foo@bar.com" # User requests with "FOO@bar.com" would raise a Not Found error try: - email = canonicalise_email(body["email"]) + email = validate_email(body["email"]) except ValueError as e: raise SynapseError(400, str(e)) send_attempt = body["send_attempt"] @@ -247,7 +247,7 @@ class PasswordRestServlet(RestServlet): # We store all email addresses canonicalised in the DB. # (See add_threepid in synapse/handlers/auth.py) try: - threepid["address"] = canonicalise_email(threepid["address"]) + threepid["address"] = validate_email(threepid["address"]) except ValueError as e: raise SynapseError(400, str(e)) # if using email, we must know about the email they're authing with! @@ -375,7 +375,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet): # Otherwise the email will be sent to "FOO@bar.com" and stored as # "foo@bar.com" in database. 
try: - email = canonicalise_email(body["email"]) + email = validate_email(body["email"]) except ValueError as e: raise SynapseError(400, str(e)) send_attempt = body["send_attempt"] diff --git a/synapse/rest/client/v2_alpha/register.py b/synapse/rest/client/v2_alpha/register.py index c5a6800b8a..a30a5df1b1 100644 --- a/synapse/rest/client/v2_alpha/register.py +++ b/synapse/rest/client/v2_alpha/register.py @@ -49,7 +49,11 @@ from synapse.push.mailer import Mailer from synapse.util.msisdn import phone_number_to_msisdn from synapse.util.ratelimitutils import FederationRateLimiter from synapse.util.stringutils import assert_valid_client_secret, random_string -from synapse.util.threepids import canonicalise_email, check_3pid_allowed +from synapse.util.threepids import ( + canonicalise_email, + check_3pid_allowed, + validate_email, +) from ._base import client_patterns, interactive_auth_handler @@ -111,7 +115,7 @@ class EmailRegisterRequestTokenRestServlet(RestServlet): # (See on_POST in EmailThreepidRequestTokenRestServlet # in synapse/rest/client/v2_alpha/account.py) try: - email = canonicalise_email(body["email"]) + email = validate_email(body["email"]) except ValueError as e: raise SynapseError(400, str(e)) send_attempt = body["send_attempt"] diff --git a/synapse/util/threepids.py b/synapse/util/threepids.py index 281c5be4fb..a1cf1960b0 100644 --- a/synapse/util/threepids.py +++ b/synapse/util/threepids.py @@ -18,6 +18,16 @@ import re logger = logging.getLogger(__name__) +# it's unclear what the maximum length of an email address is. RFC3696 (as corrected +# by errata) says: +# the upper limit on address lengths should normally be considered to be 254. +# +# In practice, mail servers appear to be more tolerant and allow 400 characters +# or so. Let's allow 500, which should be plenty for everyone. +# +MAX_EMAIL_ADDRESS_LENGTH = 500 + + def check_3pid_allowed(hs, medium, address): """Checks whether a given format of 3PID is allowed to be used on this HS @@ -70,3 +80,23 @@ def canonicalise_email(address: str) -> str: raise ValueError("Unable to parse email address") return parts[0].casefold() + "@" + parts[1].lower() + + +def validate_email(address: str) -> str: + """Does some basic validation on an email address. + + Returns the canonicalised email, as returned by `canonicalise_email`. + + Raises a ValueError if the email is invalid. + """ + # First we try canonicalising in case that fails + address = canonicalise_email(address) + + # Email addresses have to be at least 3 characters. 
+ if len(address) < 3: + raise ValueError("Unable to parse email address") + + if len(address) > MAX_EMAIL_ADDRESS_LENGTH: + raise ValueError("Unable to parse email address") + + return address diff --git a/tests/rest/client/v2_alpha/test_register.py b/tests/rest/client/v2_alpha/test_register.py index 98695b05d5..1cad5f00eb 100644 --- a/tests/rest/client/v2_alpha/test_register.py +++ b/tests/rest/client/v2_alpha/test_register.py @@ -310,6 +310,57 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase): self.assertIsNotNone(channel.json_body.get("sid")) + @unittest.override_config( + { + "public_baseurl": "https://test_server", + "email": { + "smtp_host": "mail_server", + "smtp_port": 2525, + "notif_from": "sender@host", + }, + } + ) + def test_reject_invalid_email(self): + """Check that bad emails are rejected""" + + # Test for email with multiple @ + channel = self.make_request( + "POST", + b"register/email/requestToken", + {"client_secret": "foobar", "email": "email@@email", "send_attempt": 1}, + ) + self.assertEquals(400, channel.code, channel.result) + # Check error to ensure that we're not erroring due to a bug in the test. + self.assertEquals( + channel.json_body, + {"errcode": "M_UNKNOWN", "error": "Unable to parse email address"}, + ) + + # Test for email with no @ + channel = self.make_request( + "POST", + b"register/email/requestToken", + {"client_secret": "foobar", "email": "email", "send_attempt": 1}, + ) + self.assertEquals(400, channel.code, channel.result) + self.assertEquals( + channel.json_body, + {"errcode": "M_UNKNOWN", "error": "Unable to parse email address"}, + ) + + # Test for super long email + email = "a@" + "a" * 1000 + channel = self.make_request( + "POST", + b"register/email/requestToken", + {"client_secret": "foobar", "email": email, "send_attempt": 1}, + ) + self.assertEquals(400, channel.code, channel.result) + self.assertEquals( + channel.json_body, + {"errcode": "M_UNKNOWN", "error": "Unable to parse email address"}, + ) + class AccountValidityTestCase(unittest.HomeserverTestCase): -- cgit 1.5.1 From c1ddbbde4fb948cf740d4c59869157943d3711c6 Mon Sep 17 00:00:00 2001 From: manuroe Date: Thu, 22 Apr 2021 18:49:42 +0200 Subject: Handle all new rate limits in demo scripts (#9858) --- changelog.d/9858.misc | 1 + demo/start.sh | 54 +++++++++++++++++++++++++++++++++++++++------------ 2 files changed, 43 insertions(+), 12 deletions(-) create mode 100644 changelog.d/9858.misc diff --git a/changelog.d/9858.misc b/changelog.d/9858.misc new file mode 100644 index 0000000000..f7e286fa69 --- /dev/null +++ b/changelog.d/9858.misc @@ -0,0 +1 @@ +Handle recently added rate limits correctly when using `--no-rate-limit` with the demo scripts. 
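The shell change that follows swaps a pile of echo/printf calls for a single heredoc appended to each demo homeserver's config. For reference, the appended YAML could equally be generated from Python along these lines (a hypothetical helper, not part of the demo scripts; it assumes PyYAML is available):

    # Hypothetical Python equivalent of the heredoc below: raise every rate
    # limiter far enough that it never fires for the demo homeservers.
    import yaml

    def no_limit():
        # Fresh dict per limiter so the YAML dump doesn't emit anchors/aliases.
        return {"per_second": 1000, "burst_count": 1000}

    ratelimiting = {
        "rc_message": no_limit(),
        "rc_registration": no_limit(),
        "rc_login": {
            "address": no_limit(),
            "account": no_limit(),
            "failed_attempts": no_limit(),
        },
        "rc_admin_redaction": no_limit(),
        "rc_joins": {"local": no_limit(), "remote": no_limit()},
        "rc_3pid_validation": no_limit(),
        "rc_invites": {"per_room": no_limit(), "per_user": no_limit()},
    }

    with open("demo/etc/8080.config", "a") as f:  # one config per demo port
        yaml.safe_dump(ratelimiting, f, default_flow_style=False)
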
diff --git a/demo/start.sh b/demo/start.sh index 621a5698b8..bc4854091b 100755 --- a/demo/start.sh +++ b/demo/start.sh @@ -96,18 +96,48 @@ for port in 8080 8081 8082; do # Check script parameters if [ $# -eq 1 ]; then if [ $1 = "--no-rate-limit" ]; then - # messages rate limit - echo 'rc_messages_per_second: 1000' >> $DIR/etc/$port.config - echo 'rc_message_burst_count: 1000' >> $DIR/etc/$port.config - - # registration rate limit - printf 'rc_registration:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config - - # login rate limit - echo 'rc_login:' >> $DIR/etc/$port.config - printf ' address:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config - printf ' account:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config - printf ' failed_attempts:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config + + # Disable any rate limiting + ratelimiting=$(cat <<-RC + rc_message: + per_second: 1000 + burst_count: 1000 + rc_registration: + per_second: 1000 + burst_count: 1000 + rc_login: + address: + per_second: 1000 + burst_count: 1000 + account: + per_second: 1000 + burst_count: 1000 + failed_attempts: + per_second: 1000 + burst_count: 1000 + rc_admin_redaction: + per_second: 1000 + burst_count: 1000 + rc_joins: + local: + per_second: 1000 + burst_count: 1000 + remote: + per_second: 1000 + burst_count: 1000 + rc_3pid_validation: + per_second: 1000 + burst_count: 1000 + rc_invites: + per_room: + per_second: 1000 + burst_count: 1000 + per_user: + per_second: 1000 + burst_count: 1000 + RC + ) + echo "${ratelimiting}" >> $DIR/etc/$port.config fi fi -- cgit 1.5.1 From 51a20914a863ac24387c424ccee14aa877e218f8 Mon Sep 17 00:00:00 2001 From: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> Date: Fri, 23 Apr 2021 11:08:41 +0100 Subject: Limit the size of HTTP responses read over federation. (#9833) --- changelog.d/9833.bugfix | 1 + synapse/http/client.py | 15 +++++++-- synapse/http/matrixfederationclient.py | 43 +++++++++++++++++++++---- tests/http/test_fedclient.py | 59 ++++++++++++++++++++++++++++++++++ 4 files changed, 110 insertions(+), 8 deletions(-) create mode 100644 changelog.d/9833.bugfix diff --git a/changelog.d/9833.bugfix b/changelog.d/9833.bugfix new file mode 100644 index 0000000000..56f9c9626b --- /dev/null +++ b/changelog.d/9833.bugfix @@ -0,0 +1 @@ +Limit the size of HTTP responses read over federation. diff --git a/synapse/http/client.py b/synapse/http/client.py index 1730187ffa..5f40f16e24 100644 --- a/synapse/http/client.py +++ b/synapse/http/client.py @@ -33,6 +33,7 @@ import treq from canonicaljson import encode_canonical_json from netaddr import AddrFormatError, IPAddress, IPSet from prometheus_client import Counter +from typing_extensions import Protocol from zope.interface import implementer, provider from OpenSSL import SSL @@ -754,6 +755,16 @@ def _timeout_to_request_timed_out_error(f: Failure): return f +class ByteWriteable(Protocol): + """The type of object which must be passed into read_body_with_max_size. + + Typically this is a file object. 
+ """ + + def write(self, data: bytes) -> int: + pass + + class BodyExceededMaxSize(Exception): """The maximum allowed size of the HTTP body was exceeded.""" @@ -790,7 +801,7 @@ class _ReadBodyWithMaxSizeProtocol(protocol.Protocol): transport = None # type: Optional[ITCPTransport] def __init__( - self, stream: BinaryIO, deferred: defer.Deferred, max_size: Optional[int] + self, stream: ByteWriteable, deferred: defer.Deferred, max_size: Optional[int] ): self.stream = stream self.deferred = deferred @@ -830,7 +841,7 @@ class _ReadBodyWithMaxSizeProtocol(protocol.Protocol): def read_body_with_max_size( - response: IResponse, stream: BinaryIO, max_size: Optional[int] + response: IResponse, stream: ByteWriteable, max_size: Optional[int] ) -> defer.Deferred: """ Read a HTTP response body to a file-object. Optionally enforcing a maximum file size. diff --git a/synapse/http/matrixfederationclient.py b/synapse/http/matrixfederationclient.py index d48721a4e2..bb837b7b19 100644 --- a/synapse/http/matrixfederationclient.py +++ b/synapse/http/matrixfederationclient.py @@ -1,5 +1,4 @@ -# Copyright 2014-2016 OpenMarket Ltd -# Copyright 2018 New Vector Ltd +# Copyright 2014-2021 The Matrix.org Foundation C.I.C. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -13,11 +12,13 @@ # See the License for the specific language governing permissions and # limitations under the License. import cgi +import codecs import logging import random import sys +import typing import urllib.parse -from io import BytesIO +from io import BytesIO, StringIO from typing import Callable, Dict, List, Optional, Tuple, Union import attr @@ -72,6 +73,9 @@ incoming_responses_counter = Counter( "synapse_http_matrixfederationclient_responses", "", ["method", "code"] ) +# a federation response can be rather large (eg a big state_ids is 50M or so), so we +# need a generous limit here. +MAX_RESPONSE_SIZE = 100 * 1024 * 1024 MAX_LONG_RETRIES = 10 MAX_SHORT_RETRIES = 3 @@ -167,12 +171,27 @@ async def _handle_json_response( try: check_content_type_is_json(response.headers) - # Use the custom JSON decoder (partially re-implements treq.json_content). - d = treq.text_content(response, encoding="utf-8") - d.addCallback(json_decoder.decode) + buf = StringIO() + d = read_body_with_max_size(response, BinaryIOWrapper(buf), MAX_RESPONSE_SIZE) d = timeout_deferred(d, timeout=timeout_sec, reactor=reactor) + def parse(_len: int): + return json_decoder.decode(buf.getvalue()) + + d.addCallback(parse) + body = await make_deferred_yieldable(d) + except BodyExceededMaxSize as e: + # The response was too big. + logger.warning( + "{%s} [%s] JSON response exceeded max size %i - %s %s", + request.txn_id, + request.destination, + MAX_RESPONSE_SIZE, + request.method, + request.uri.decode("ascii"), + ) + raise RequestSendFailed(e, can_retry=False) from e except ValueError as e: # The JSON content was invalid. logger.warning( @@ -218,6 +237,18 @@ async def _handle_json_response( return body +class BinaryIOWrapper: + """A wrapper for a TextIO which converts from bytes on the fly.""" + + def __init__(self, file: typing.TextIO, encoding="utf-8", errors="strict"): + self.decoder = codecs.getincrementaldecoder(encoding)(errors) + self.file = file + + def write(self, b: Union[bytes, bytearray]) -> int: + self.file.write(self.decoder.decode(b)) + return len(b) + + class MatrixFederationHttpClient: """HTTP client used to talk to other homeservers over the federation protocol. 
Send client certificates and signs requests. diff --git a/tests/http/test_fedclient.py b/tests/http/test_fedclient.py index 9e97185507..ed9a884d76 100644 --- a/tests/http/test_fedclient.py +++ b/tests/http/test_fedclient.py @@ -26,6 +26,7 @@ from twisted.web.http import HTTPChannel from synapse.api.errors import RequestSendFailed from synapse.http.matrixfederationclient import ( + MAX_RESPONSE_SIZE, MatrixFederationHttpClient, MatrixFederationRequest, ) @@ -560,3 +561,61 @@ class FederationClientTests(HomeserverTestCase): f = self.failureResultOf(test_d) self.assertIsInstance(f.value, RequestSendFailed) + + def test_too_big(self): + """ + Test what happens if a huge response is returned from the remote endpoint. + """ + + test_d = defer.ensureDeferred(self.cl.get_json("testserv:8008", "foo/bar")) + + self.pump() + + # Nothing happened yet + self.assertNoResult(test_d) + + # Make sure treq is trying to connect + clients = self.reactor.tcpClients + self.assertEqual(len(clients), 1) + (host, port, factory, _timeout, _bindAddress) = clients[0] + self.assertEqual(host, "1.2.3.4") + self.assertEqual(port, 8008) + + # complete the connection and wire it up to a fake transport + protocol = factory.buildProtocol(None) + transport = StringTransport() + protocol.makeConnection(transport) + + # that should have made it send the request to the transport + self.assertRegex(transport.value(), b"^GET /foo/bar") + self.assertRegex(transport.value(), b"Host: testserv:8008") + + # Deferred is still without a result + self.assertNoResult(test_d) + + # Send it a huge HTTP response + protocol.dataReceived( + b"HTTP/1.1 200 OK\r\n" + b"Server: Fake\r\n" + b"Content-Type: application/json\r\n" + b"\r\n" + ) + + self.pump() + + # should still be waiting + self.assertNoResult(test_d) + + sent = 0 + chunk_size = 1024 * 512 + while not test_d.called: + protocol.dataReceived(b"a" * chunk_size) + sent += chunk_size + self.assertLessEqual(sent, MAX_RESPONSE_SIZE) + + self.assertEqual(sent, MAX_RESPONSE_SIZE) + + f = self.failureResultOf(test_d) + self.assertIsInstance(f.value, RequestSendFailed) + + self.assertTrue(transport.disconnecting) -- cgit 1.5.1 From 3853a7edfcee1c00ba4df04b06821397e1155257 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Fri, 23 Apr 2021 11:47:07 +0100 Subject: Only store data in caches, not "smart" objects (#9845) --- changelog.d/9845.misc | 1 + synapse/push/bulk_push_rule_evaluator.py | 161 +++++++++++++++------------ synapse/storage/databases/main/roommember.py | 161 +++++++++++++++------------ 3 files changed, 182 insertions(+), 141 deletions(-) create mode 100644 changelog.d/9845.misc diff --git a/changelog.d/9845.misc b/changelog.d/9845.misc new file mode 100644 index 0000000000..875dd6d131 --- /dev/null +++ b/changelog.d/9845.misc @@ -0,0 +1 @@ +Only store the raw data in the in-memory caches, rather than objects that include references to e.g. the data stores. diff --git a/synapse/push/bulk_push_rule_evaluator.py b/synapse/push/bulk_push_rule_evaluator.py index 50b470c310..350646f458 100644 --- a/synapse/push/bulk_push_rule_evaluator.py +++ b/synapse/push/bulk_push_rule_evaluator.py @@ -106,6 +106,10 @@ class BulkPushRuleEvaluator: self.store = hs.get_datastore() self.auth = hs.get_auth() + # Used by `RulesForRoom` to ensure only one thing mutates the cache at a + # time. Keyed off room_id. 
+ self._rules_linearizer = Linearizer(name="rules_for_room") + self.room_push_rule_cache_metrics = register_cache( "cache", "room_push_rule_cache", @@ -123,7 +127,16 @@ class BulkPushRuleEvaluator: dict of user_id -> push_rules """ room_id = event.room_id - rules_for_room = self._get_rules_for_room(room_id) + + rules_for_room_data = self._get_rules_for_room(room_id) + rules_for_room = RulesForRoom( + hs=self.hs, + room_id=room_id, + rules_for_room_cache=self._get_rules_for_room.cache, + room_push_rule_cache_metrics=self.room_push_rule_cache_metrics, + linearizer=self._rules_linearizer, + cached_data=rules_for_room_data, + ) rules_by_user = await rules_for_room.get_rules(event, context) @@ -142,17 +155,12 @@ class BulkPushRuleEvaluator: return rules_by_user @lru_cache() - def _get_rules_for_room(self, room_id: str) -> "RulesForRoom": - """Get the current RulesForRoom object for the given room id""" - # It's important that RulesForRoom gets added to self._get_rules_for_room.cache + def _get_rules_for_room(self, room_id: str) -> "RulesForRoomData": + """Get the current RulesForRoomData object for the given room id""" + # It's important that the RulesForRoomData object gets added to self._get_rules_for_room.cache # before any lookup methods get called on it as otherwise there may be # a race if invalidate_all gets called (which assumes its in the cache) - return RulesForRoom( - self.hs, - room_id, - self._get_rules_for_room.cache, - self.room_push_rule_cache_metrics, - ) + return RulesForRoomData() async def _get_power_levels_and_sender_level( self, event: EventBase, context: EventContext @@ -282,11 +290,49 @@ def _condition_checker( return True +@attr.s(slots=True) +class RulesForRoomData: + """The data stored in the cache by `RulesForRoom`. + + We don't store `RulesForRoom` directly in the cache as we want our caches to + *only* include data, and not references to e.g. the data stores. + """ + + # event_id -> (user_id, state) + member_map = attr.ib(type=Dict[str, Tuple[str, str]], factory=dict) + # user_id -> rules + rules_by_user = attr.ib(type=Dict[str, List[Dict[str, dict]]], factory=dict) + + # The last state group we updated the caches for. If the state_group of + # a new event comes along, we know that we can just return the cached + # result. + # On invalidation of the rules themselves (if the user changes them), + # we invalidate everything and set state_group to `object()` + state_group = attr.ib(type=Union[object, int], factory=object) + + # A sequence number to keep track of when we're allowed to update the + # cache. We bump the sequence number when we invalidate the cache. If + # the sequence number changes while we're calculating stuff we should + # not update the cache with it. + sequence = attr.ib(type=int, default=0) + + # A cache of user_ids that we *know* aren't interesting, e.g. user_ids + # owned by AS's, or remote users, etc. (I.e. users we will never need to + # calculate push for) + # These never need to be invalidated as we will never set up push for + # them. + uninteresting_user_set = attr.ib(type=Set[str], factory=set) + + class RulesForRoom: """Caches push rules for users in a room. This efficiently handles users joining/leaving the room by not invalidating the entire cache for the room. + + A new instance is constructed for each call to + `BulkPushRuleEvaluator._get_rules_for_event`, with the cached data from + previous calls passed in. 
""" def __init__( @@ -295,6 +341,8 @@ class RulesForRoom: room_id: str, rules_for_room_cache: LruCache, room_push_rule_cache_metrics: CacheMetric, + linearizer: Linearizer, + cached_data: RulesForRoomData, ): """ Args: @@ -303,38 +351,21 @@ class RulesForRoom: rules_for_room_cache: The cache object that caches these RoomsForUser objects. room_push_rule_cache_metrics: The metrics object + linearizer: The linearizer used to ensure only one thing mutates + the cache at a time. Keyed off room_id + cached_data: Cached data from previous calls to `self.get_rules`, + can be mutated. """ self.room_id = room_id self.is_mine_id = hs.is_mine_id self.store = hs.get_datastore() self.room_push_rule_cache_metrics = room_push_rule_cache_metrics - self.linearizer = Linearizer(name="rules_for_room") - - # event_id -> (user_id, state) - self.member_map = {} # type: Dict[str, Tuple[str, str]] - # user_id -> rules - self.rules_by_user = {} # type: Dict[str, List[Dict[str, dict]]] - - # The last state group we updated the caches for. If the state_group of - # a new event comes along, we know that we can just return the cached - # result. - # On invalidation of the rules themselves (if the user changes them), - # we invalidate everything and set state_group to `object()` - self.state_group = object() - - # A sequence number to keep track of when we're allowed to update the - # cache. We bump the sequence number when we invalidate the cache. If - # the sequence number changes while we're calculating stuff we should - # not update the cache with it. - self.sequence = 0 - - # A cache of user_ids that we *know* aren't interesting, e.g. user_ids - # owned by AS's, or remote users, etc. (I.e. users we will never need to - # calculate push for) - # These never need to be invalidated as we will never set up push for - # them. - self.uninteresting_user_set = set() # type: Set[str] + # Used to ensure only one thing mutates the cache at a time. Keyed off + # room_id. + self.linearizer = linearizer + + self.data = cached_data # We need to be clever on the invalidating caches callbacks, as # otherwise the invalidation callback holds a reference to the object, @@ -352,25 +383,25 @@ class RulesForRoom: """ state_group = context.state_group - if state_group and self.state_group == state_group: + if state_group and self.data.state_group == state_group: logger.debug("Using cached rules for %r", self.room_id) self.room_push_rule_cache_metrics.inc_hits() - return self.rules_by_user + return self.data.rules_by_user - with (await self.linearizer.queue(())): - if state_group and self.state_group == state_group: + with (await self.linearizer.queue(self.room_id)): + if state_group and self.data.state_group == state_group: logger.debug("Using cached rules for %r", self.room_id) self.room_push_rule_cache_metrics.inc_hits() - return self.rules_by_user + return self.data.rules_by_user self.room_push_rule_cache_metrics.inc_misses() ret_rules_by_user = {} missing_member_event_ids = {} - if state_group and self.state_group == context.prev_group: + if state_group and self.data.state_group == context.prev_group: # If we have a simple delta then we can reuse most of the previous # results. 
- ret_rules_by_user = self.rules_by_user + ret_rules_by_user = self.data.rules_by_user current_state_ids = context.delta_ids push_rules_delta_state_cache_metric.inc_hits() @@ -393,24 +424,24 @@ class RulesForRoom: if typ != EventTypes.Member: continue - if user_id in self.uninteresting_user_set: + if user_id in self.data.uninteresting_user_set: continue if not self.is_mine_id(user_id): - self.uninteresting_user_set.add(user_id) + self.data.uninteresting_user_set.add(user_id) continue if self.store.get_if_app_services_interested_in_user(user_id): - self.uninteresting_user_set.add(user_id) + self.data.uninteresting_user_set.add(user_id) continue event_id = current_state_ids[key] - res = self.member_map.get(event_id, None) + res = self.data.member_map.get(event_id, None) if res: user_id, state = res if state == Membership.JOIN: - rules = self.rules_by_user.get(user_id, None) + rules = self.data.rules_by_user.get(user_id, None) if rules: ret_rules_by_user[user_id] = rules continue @@ -430,7 +461,7 @@ class RulesForRoom: else: # The push rules didn't change but lets update the cache anyway self.update_cache( - self.sequence, + self.data.sequence, members={}, # There were no membership changes rules_by_user=ret_rules_by_user, state_group=state_group, @@ -461,7 +492,7 @@ class RulesForRoom: for. Used when updating the cache. event: The event we are currently computing push rules for. """ - sequence = self.sequence + sequence = self.data.sequence rows = await self.store.get_membership_from_event_ids(member_event_ids.values()) @@ -501,23 +532,11 @@ class RulesForRoom: self.update_cache(sequence, members, ret_rules_by_user, state_group) - def invalidate_all(self) -> None: - # Note: Don't hand this function directly to an invalidation callback - # as it keeps a reference to self and will stop this instance from being - # GC'd if it gets dropped from the rules_to_user cache. 
Instead use - # `self.invalidate_all_cb` - logger.debug("Invalidating RulesForRoom for %r", self.room_id) - self.sequence += 1 - self.state_group = object() - self.member_map = {} - self.rules_by_user = {} - push_rules_invalidation_counter.inc() - def update_cache(self, sequence, members, rules_by_user, state_group) -> None: - if sequence == self.sequence: - self.member_map.update(members) - self.rules_by_user = rules_by_user - self.state_group = state_group + if sequence == self.data.sequence: + self.data.member_map.update(members) + self.data.rules_by_user = rules_by_user + self.data.state_group = state_group @attr.attrs(slots=True, frozen=True) @@ -535,6 +554,10 @@ class _Invalidation: room_id = attr.ib(type=str) def __call__(self) -> None: - rules = self.cache.get(self.room_id, None, update_metrics=False) - if rules: - rules.invalidate_all() + rules_data = self.cache.get(self.room_id, None, update_metrics=False) + if rules_data: + rules_data.sequence += 1 + rules_data.state_group = object() + rules_data.member_map = {} + rules_data.rules_by_user = {} + push_rules_invalidation_counter.inc() diff --git a/synapse/storage/databases/main/roommember.py b/synapse/storage/databases/main/roommember.py index bd8513cd43..2a8532f8c1 100644 --- a/synapse/storage/databases/main/roommember.py +++ b/synapse/storage/databases/main/roommember.py @@ -23,8 +23,11 @@ from typing import ( Optional, Set, Tuple, + Union, ) +import attr + from synapse.api.constants import EventTypes, Membership from synapse.events import EventBase from synapse.events.snapshot import EventContext @@ -43,7 +46,7 @@ from synapse.storage.roommember import ( ProfileInfo, RoomsForUser, ) -from synapse.types import PersistedEventPosition, get_domain_from_id +from synapse.types import PersistedEventPosition, StateMap, get_domain_from_id from synapse.util.async_helpers import Linearizer from synapse.util.caches import intern_string from synapse.util.caches.descriptors import _CacheContext, cached, cachedList @@ -63,6 +66,10 @@ class RoomMemberWorkerStore(EventsWorkerStore): def __init__(self, database: DatabasePool, db_conn, hs): super().__init__(database, db_conn, hs) + # Used by `_get_joined_hosts` to ensure only one thing mutates the cache + # at a time. Keyed by room_id. + self._joined_host_linearizer = Linearizer("_JoinedHostsCache") + # Is the current_state_events.membership up to date? Or is the # background update still running? self._current_state_events_membership_up_to_date = False @@ -740,19 +747,82 @@ class RoomMemberWorkerStore(EventsWorkerStore): @cached(num_args=2, max_entries=10000, iterable=True) async def _get_joined_hosts( - self, room_id, state_group, current_state_ids, state_entry - ): - # We don't use `state_group`, its there so that we can cache based - # on it. However, its important that its never None, since two current_state's - # with a state_group of None are likely to be different. + self, + room_id: str, + state_group: int, + current_state_ids: StateMap[str], + state_entry: "_StateCacheEntry", + ) -> FrozenSet[str]: + # We don't use `state_group`, its there so that we can cache based on + # it. However, its important that its never None, since two + # current_state's with a state_group of None are likely to be different. + # + # The `state_group` must match the `state_entry.state_group` (if not None). 
assert state_group is not None - + assert state_entry.state_group is None or state_entry.state_group == state_group + + # We use a secondary cache of previous work to allow us to build up the + # joined hosts for the given state group based on previous state groups. + # + # We cache one object per room containing the results of the last state + # group we got joined hosts for. The idea is that generally + # `get_joined_hosts` is called with the "current" state group for the + # room, and so consecutive calls will be for consecutive state groups + # which point to the previous state group. cache = await self._get_joined_hosts_cache(room_id) - return await cache.get_destinations(state_entry) + + # If the state group in the cache matches, we already have the data we need. + if state_entry.state_group == cache.state_group: + return frozenset(cache.hosts_to_joined_users) + + # Since we'll mutate the cache we need to lock. + with (await self._joined_host_linearizer.queue(room_id)): + if state_entry.state_group == cache.state_group: + # Same state group, so nothing to do. We've already checked for + # this above, but the cache may have changed while waiting on + # the lock. + pass + elif state_entry.prev_group == cache.state_group: + # The cached work is for the previous state group, so we work out + # the delta. + for (typ, state_key), event_id in state_entry.delta_ids.items(): + if typ != EventTypes.Member: + continue + + host = intern_string(get_domain_from_id(state_key)) + user_id = state_key + known_joins = cache.hosts_to_joined_users.setdefault(host, set()) + + event = await self.get_event(event_id) + if event.membership == Membership.JOIN: + known_joins.add(user_id) + else: + known_joins.discard(user_id) + + if not known_joins: + cache.hosts_to_joined_users.pop(host, None) + else: + # The cache doesn't match the state group or prev state group, + # so we calculate the result from first principles. + joined_users = await self.get_joined_users_from_state( + room_id, state_entry + ) + + cache.hosts_to_joined_users = {} + for user_id in joined_users: + host = intern_string(get_domain_from_id(user_id)) + cache.hosts_to_joined_users.setdefault(host, set()).add(user_id) + + if state_entry.state_group: + cache.state_group = state_entry.state_group + else: + cache.state_group = object() + + return frozenset(cache.hosts_to_joined_users) @cached(max_entries=10000) def _get_joined_hosts_cache(self, room_id: str) -> "_JoinedHostsCache": - return _JoinedHostsCache(self, room_id) + return _JoinedHostsCache() @cached(num_args=2) async def did_forget(self, user_id: str, room_id: str) -> bool: @@ -1062,71 +1132,18 @@ class RoomMemberStore(RoomMemberWorkerStore, RoomMemberBackgroundUpdateStore): await self.db_pool.runInteraction("forget_membership", f) +@attr.s(slots=True) class _JoinedHostsCache: - """Cache for joined hosts in a room that is optimised to handle updates - via state deltas. - """ - - def __init__(self, store, room_id): - self.store = store - self.room_id = room_id + """The cached data used by the `_get_joined_hosts_cache`.""" - self.hosts_to_joined_users = {} + # Dict of host to the set of their users in the room at the state group. 
+ hosts_to_joined_users = attr.ib(type=Dict[str, Set[str]], factory=dict) - self.state_group = object() - - self.linearizer = Linearizer("_JoinedHostsCache") - - self._len = 0 - - async def get_destinations(self, state_entry: "_StateCacheEntry") -> Set[str]: - """Get set of destinations for a state entry - - Args: - state_entry - - Returns: - The destinations as a set. - """ - if state_entry.state_group == self.state_group: - return frozenset(self.hosts_to_joined_users) - - with (await self.linearizer.queue(())): - if state_entry.state_group == self.state_group: - pass - elif state_entry.prev_group == self.state_group: - for (typ, state_key), event_id in state_entry.delta_ids.items(): - if typ != EventTypes.Member: - continue - - host = intern_string(get_domain_from_id(state_key)) - user_id = state_key - known_joins = self.hosts_to_joined_users.setdefault(host, set()) - - event = await self.store.get_event(event_id) - if event.membership == Membership.JOIN: - known_joins.add(user_id) - else: - known_joins.discard(user_id) - - if not known_joins: - self.hosts_to_joined_users.pop(host, None) - else: - joined_users = await self.store.get_joined_users_from_state( - self.room_id, state_entry - ) - - self.hosts_to_joined_users = {} - for user_id in joined_users: - host = intern_string(get_domain_from_id(user_id)) - self.hosts_to_joined_users.setdefault(host, set()).add(user_id) - - if state_entry.state_group: - self.state_group = state_entry.state_group - else: - self.state_group = object() - self._len = sum(len(v) for v in self.hosts_to_joined_users.values()) - return frozenset(self.hosts_to_joined_users) + # The state group `hosts_to_joined_users` is derived from. Will be an object + # if the instance is newly created or if the state is not based on a state + # group. (An object is used as a sentinel value to ensure that it never is + # equal to anything else). + state_group = attr.ib(type=Union[object, int], factory=object) def __len__(self): - return self._len + return sum(len(v) for v in self.hosts_to_joined_users.values()) -- cgit 1.5.1 From d924827da1db5d210eb06db2247a1403ed4c8b9a Mon Sep 17 00:00:00 2001 From: Patrick Cloke Date: Fri, 23 Apr 2021 07:05:51 -0400 Subject: Check for space membership during a remote join of a restricted room (#9814) When receiving a /send_join request for a room with join rules set to 'restricted', check if the user is a member of the spaces defined in the 'allow' key of the join rules. This only applies to an experimental room version, as defined in MSC3083. --- changelog.d/9814.feature | 1 + synapse/api/auth.py | 1 + synapse/handlers/event_auth.py | 86 +++++++++++++++++++++++++++++++++++++++++ synapse/handlers/federation.py | 44 ++++++++++++++++----- synapse/handlers/room_member.py | 62 ++--------------------------- synapse/server.py | 5 +++ 6 files changed, 131 insertions(+), 68 deletions(-) create mode 100644 changelog.d/9814.feature create mode 100644 synapse/handlers/event_auth.py diff --git a/changelog.d/9814.feature b/changelog.d/9814.feature new file mode 100644 index 0000000000..9404ad2fc0 --- /dev/null +++ b/changelog.d/9814.feature @@ -0,0 +1 @@ +Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. 
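For orientation, the shape of a restricted room's m.room.join_rules content that the new EventAuthHandler below understands looks roughly like this (MSC3083 was still experimental at this point, so treat the schema as provisional; the check is condensed here for illustration):

    # Provisional MSC3083 join-rules content, matching what
    # can_join_without_invite below reads: a "restricted" join rule plus an
    # "allow" list whose entries name spaces via a "space" key.
    join_rules_content = {
        "join_rule": "restricted",
        "allow": [
            {"space": "!engineering:example.org"},
            {"space": "!staff:example.org"},
        ],
    }

    def user_may_join(joined_rooms):
        # Membership in any one of the allowed spaces is sufficient; invalid
        # entries are skipped, and a malformed "allow" means invite-only.
        allowed = join_rules_content.get("allow", [])
        if not isinstance(allowed, list):
            return False
        return any(
            isinstance(s, dict) and s.get("space") in joined_rooms
            for s in allowed
        )

So user_may_join({"!staff:example.org"}) is True, while an empty set of joined rooms falls back to invite-only. The real handler additionally fetches the join rules event out of room state and consults the store for the user's joined rooms.
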
diff --git a/synapse/api/auth.py b/synapse/api/auth.py index 872fd100cd..2d845d0d5c 100644 --- a/synapse/api/auth.py +++ b/synapse/api/auth.py @@ -65,6 +65,7 @@ class Auth: """ FIXME: This class contains a mix of functions for authenticating users of our client-server API and authenticating events added to room graphs. + The latter should be moved to synapse.handlers.event_auth.EventAuthHandler. """ def __init__(self, hs): diff --git a/synapse/handlers/event_auth.py b/synapse/handlers/event_auth.py new file mode 100644 index 0000000000..eff639f407 --- /dev/null +++ b/synapse/handlers/event_auth.py @@ -0,0 +1,86 @@ +# Copyright 2021 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from typing import TYPE_CHECKING + +from synapse.api.constants import EventTypes, JoinRules +from synapse.api.room_versions import RoomVersion +from synapse.types import StateMap + +if TYPE_CHECKING: + from synapse.server import HomeServer + + +class EventAuthHandler: + """ + This class contains methods for authenticating events added to room graphs. + """ + + def __init__(self, hs: "HomeServer"): + self._store = hs.get_datastore() + + async def can_join_without_invite( + self, state_ids: StateMap[str], room_version: RoomVersion, user_id: str + ) -> bool: + """ + Check whether a user can join a room without an invite. + + When joining a room with restricted joined rules (as defined in MSC3083), + the membership of spaces must be checked during join. + + Args: + state_ids: The state of the room as it currently is. + room_version: The room version of the room being joined. + user_id: The user joining the room. + + Returns: + True if the user can join the room, false otherwise. + """ + # This only applies to room versions which support the new join rule. + if not room_version.msc3083_join_rules: + return True + + # If there's no join rule, then it defaults to invite (so this doesn't apply). + join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""), None) + if not join_rules_event_id: + return True + + # If the join rule is not restricted, this doesn't apply. + join_rules_event = await self._store.get_event(join_rules_event_id) + if join_rules_event.content.get("join_rule") != JoinRules.MSC3083_RESTRICTED: + return True + + # If allowed is of the wrong form, then only allow invited users. + allowed_spaces = join_rules_event.content.get("allow", []) + if not isinstance(allowed_spaces, list): + return False + + # Get the list of joined rooms and see if there's an overlap. + joined_rooms = await self._store.get_rooms_for_user(user_id) + + # Pull out the other room IDs, invalid data gets filtered. + for space in allowed_spaces: + if not isinstance(space, dict): + continue + + space_id = space.get("space") + if not isinstance(space_id, str): + continue + + # The user was joined to one of the spaces specified, they can join + # this room! + if space_id in joined_rooms: + return True + + # The user was not in any of the required spaces. 
+ return False diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py index dbdd7d2db3..9d867aaf4d 100644 --- a/synapse/handlers/federation.py +++ b/synapse/handlers/federation.py @@ -146,6 +146,7 @@ class FederationHandler(BaseHandler): self.is_mine_id = hs.is_mine_id self.spam_checker = hs.get_spam_checker() self.event_creation_handler = hs.get_event_creation_handler() + self._event_auth_handler = hs.get_event_auth_handler() self._message_handler = hs.get_message_handler() self._server_notices_mxid = hs.config.server_notices_mxid self.config = hs.config @@ -1673,8 +1674,40 @@ class FederationHandler(BaseHandler): # would introduce the danger of backwards-compatibility problems. event.internal_metadata.send_on_behalf_of = origin + # Calculate the event context. context = await self.state_handler.compute_event_context(event) - context = await self._auth_and_persist_event(origin, event, context) + + # Get the state before the new event. + prev_state_ids = await context.get_prev_state_ids() + + # Check if the user is already in the room or invited to the room. + user_id = event.state_key + prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None) + newly_joined = True + user_is_invited = False + if prev_member_event_id: + prev_member_event = await self.store.get_event(prev_member_event_id) + newly_joined = prev_member_event.membership != Membership.JOIN + user_is_invited = prev_member_event.membership == Membership.INVITE + + # If the member is not already in the room, and not invited, check if + # they should be allowed access via membership in a space. + if ( + newly_joined + and not user_is_invited + and not await self._event_auth_handler.can_join_without_invite( + prev_state_ids, + event.room_version, + user_id, + ) + ): + raise AuthError( + 403, + "You do not belong to any of the required spaces to join this room.", + ) + + # Persist the event. + await self._auth_and_persist_event(origin, event, context) logger.debug( "on_send_join_request: After _auth_and_persist_event: %s, sigs: %s", @@ -1682,8 +1715,6 @@ class FederationHandler(BaseHandler): event.signatures, ) - prev_state_ids = await context.get_prev_state_ids() - state_ids = list(prev_state_ids.values()) auth_chain = await self.store.get_auth_chain(event.room_id, state_ids) @@ -2006,7 +2037,7 @@ class FederationHandler(BaseHandler): state: Optional[Iterable[EventBase]] = None, auth_events: Optional[MutableStateMap[EventBase]] = None, backfilled: bool = False, - ) -> EventContext: + ) -> None: """ Process an event by performing auth checks and then persisting to the database. @@ -2028,9 +2059,6 @@ class FederationHandler(BaseHandler): event is an outlier), may be the auth events claimed by the remote server. backfilled: True if the event was backfilled. - - Returns: - The event context. 
""" context = await self._check_event_auth( origin, @@ -2060,8 +2088,6 @@ class FederationHandler(BaseHandler): ) raise - return context - async def _auth_and_persist_events( self, origin: str, diff --git a/synapse/handlers/room_member.py b/synapse/handlers/room_member.py index 2bbfac6471..2c5bada1d8 100644 --- a/synapse/handlers/room_member.py +++ b/synapse/handlers/room_member.py @@ -19,7 +19,7 @@ from http import HTTPStatus from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple from synapse import types -from synapse.api.constants import AccountDataTypes, EventTypes, JoinRules, Membership +from synapse.api.constants import AccountDataTypes, EventTypes, Membership from synapse.api.errors import ( AuthError, Codes, @@ -28,7 +28,6 @@ from synapse.api.errors import ( SynapseError, ) from synapse.api.ratelimiting import Ratelimiter -from synapse.api.room_versions import RoomVersion from synapse.events import EventBase from synapse.events.snapshot import EventContext from synapse.types import JsonDict, Requester, RoomAlias, RoomID, StateMap, UserID @@ -64,6 +63,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): self.profile_handler = hs.get_profile_handler() self.event_creation_handler = hs.get_event_creation_handler() self.account_data_handler = hs.get_account_data_handler() + self.event_auth_handler = hs.get_event_auth_handler() self.member_linearizer = Linearizer(name="member") @@ -178,62 +178,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): await self._invites_per_user_limiter.ratelimit(requester, invitee_user_id) - async def _can_join_without_invite( - self, state_ids: StateMap[str], room_version: RoomVersion, user_id: str - ) -> bool: - """ - Check whether a user can join a room without an invite. - - When joining a room with restricted joined rules (as defined in MSC3083), - the membership of spaces must be checked during join. - - Args: - state_ids: The state of the room as it currently is. - room_version: The room version of the room being joined. - user_id: The user joining the room. - - Returns: - True if the user can join the room, false otherwise. - """ - # This only applies to room versions which support the new join rule. - if not room_version.msc3083_join_rules: - return True - - # If there's no join rule, then it defaults to public (so this doesn't apply). - join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""), None) - if not join_rules_event_id: - return True - - # If the join rule is not restricted, this doesn't apply. - join_rules_event = await self.store.get_event(join_rules_event_id) - if join_rules_event.content.get("join_rule") != JoinRules.MSC3083_RESTRICTED: - return True - - # If allowed is of the wrong form, then only allow invited users. - allowed_spaces = join_rules_event.content.get("allow", []) - if not isinstance(allowed_spaces, list): - return False - - # Get the list of joined rooms and see if there's an overlap. - joined_rooms = await self.store.get_rooms_for_user(user_id) - - # Pull out the other room IDs, invalid data gets filtered. - for space in allowed_spaces: - if not isinstance(space, dict): - continue - - space_id = space.get("space") - if not isinstance(space_id, str): - continue - - # The user was joined to one of the spaces specified, they can join - # this room! - if space_id in joined_rooms: - return True - - # The user was not in any of the required spaces. 
- return False - async def _local_membership_update( self, requester: Requester, @@ -302,7 +246,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta): if ( newly_joined and not user_is_invited - and not await self._can_join_without_invite( + and not await self.event_auth_handler.can_join_without_invite( prev_state_ids, event.room_version, user_id ) ): diff --git a/synapse/server.py b/synapse/server.py index 59ae91b503..67598fffe3 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -77,6 +77,7 @@ from synapse.handlers.devicemessage import DeviceMessageHandler from synapse.handlers.directory import DirectoryHandler from synapse.handlers.e2e_keys import E2eKeysHandler from synapse.handlers.e2e_room_keys import E2eRoomKeysHandler +from synapse.handlers.event_auth import EventAuthHandler from synapse.handlers.events import EventHandler, EventStreamHandler from synapse.handlers.federation import FederationHandler from synapse.handlers.groups_local import GroupsLocalHandler, GroupsLocalWorkerHandler @@ -746,6 +747,10 @@ class HomeServer(metaclass=abc.ABCMeta): def get_space_summary_handler(self) -> SpaceSummaryHandler: return SpaceSummaryHandler(self) + @cache_in_self + def get_event_auth_handler(self) -> EventAuthHandler: + return EventAuthHandler(self) + @cache_in_self def get_external_cache(self) -> ExternalCache: return ExternalCache(self) -- cgit 1.5.1 From 9d25a0ae65ce8728d0fda1eebaf0b469316f84d7 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Fri, 23 Apr 2021 12:21:55 +0100 Subject: Split presence out of master (#9820) --- changelog.d/9820.feature | 1 + scripts/synapse_port_db | 7 +- synapse/app/generic_worker.py | 31 +------- synapse/config/workers.py | 27 ++++++- synapse/handlers/presence.py | 56 ++++++++----- synapse/replication/http/_base.py | 5 +- synapse/replication/slave/storage/presence.py | 50 ------------ synapse/replication/tcp/handler.py | 18 ++++- synapse/replication/tcp/streams/_base.py | 17 ++-- synapse/rest/client/v1/presence.py | 7 +- synapse/server.py | 6 +- synapse/storage/databases/main/__init__.py | 47 +---------- synapse/storage/databases/main/presence.py | 92 +++++++++++++++++++++- .../schema/delta/59/12presence_stream_instance.sql | 18 +++++ .../59/12presence_stream_instance_seq.sql.postgres | 20 +++++ tests/app/test_frontend_proxy.py | 83 ------------------- tests/rest/client/v1/test_presence.py | 5 +- 17 files changed, 245 insertions(+), 245 deletions(-) create mode 100644 changelog.d/9820.feature delete mode 100644 synapse/replication/slave/storage/presence.py create mode 100644 synapse/storage/databases/main/schema/delta/59/12presence_stream_instance.sql create mode 100644 synapse/storage/databases/main/schema/delta/59/12presence_stream_instance_seq.sql.postgres delete mode 100644 tests/app/test_frontend_proxy.py diff --git a/changelog.d/9820.feature b/changelog.d/9820.feature new file mode 100644 index 0000000000..f56b0bb3bd --- /dev/null +++ b/changelog.d/9820.feature @@ -0,0 +1 @@ +Add experimental support for handling presence on a worker. 
diff --git a/scripts/synapse_port_db b/scripts/synapse_port_db index b7c1ffc956..f0c93d5226 100755 --- a/scripts/synapse_port_db +++ b/scripts/synapse_port_db @@ -634,8 +634,11 @@ class Porter(object): "device_inbox_sequence", ("device_inbox", "device_federation_outbox") ) await self._setup_sequence( - "account_data_sequence", ("room_account_data", "room_tags_revisions", "account_data")) - await self._setup_sequence("receipts_sequence", ("receipts_linearized", )) + "account_data_sequence", + ("room_account_data", "room_tags_revisions", "account_data"), + ) + await self._setup_sequence("receipts_sequence", ("receipts_linearized",)) + await self._setup_sequence("presence_stream_sequence", ("presence_stream",)) await self._setup_auth_chain_sequence() # Step 3. Get tables. diff --git a/synapse/app/generic_worker.py b/synapse/app/generic_worker.py index 26c458dbb6..7b2ac3ca64 100644 --- a/synapse/app/generic_worker.py +++ b/synapse/app/generic_worker.py @@ -55,7 +55,6 @@ from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.filtering import SlavedFilteringStore from synapse.replication.slave.storage.groups import SlavedGroupServerStore from synapse.replication.slave.storage.keys import SlavedKeyStore -from synapse.replication.slave.storage.presence import SlavedPresenceStore from synapse.replication.slave.storage.profile import SlavedProfileStore from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore from synapse.replication.slave.storage.pushers import SlavedPusherStore @@ -64,7 +63,7 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto from synapse.replication.slave.storage.room import RoomStore from synapse.replication.slave.storage.transactions import SlavedTransactionStore from synapse.rest.admin import register_servlets_for_media_repo -from synapse.rest.client.v1 import events, login, room +from synapse.rest.client.v1 import events, login, presence, room from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet from synapse.rest.client.v1.profile import ( ProfileAvatarURLRestServlet, @@ -110,6 +109,7 @@ from synapse.storage.databases.main.metrics import ServerMetricsStore from synapse.storage.databases.main.monthly_active_users import ( MonthlyActiveUsersWorkerStore, ) +from synapse.storage.databases.main.presence import PresenceStore from synapse.storage.databases.main.search import SearchWorkerStore from synapse.storage.databases.main.stats import StatsStore from synapse.storage.databases.main.transactions import TransactionWorkerStore @@ -121,26 +121,6 @@ from synapse.util.versionstring import get_version_string logger = logging.getLogger("synapse.app.generic_worker") -class PresenceStatusStubServlet(RestServlet): - """If presence is disabled this servlet can be used to stub out setting - presence status. - """ - - PATTERNS = client_patterns("/presence/(?P[^/]*)/status") - - def __init__(self, hs): - super().__init__() - self.auth = hs.get_auth() - - async def on_GET(self, request, user_id): - await self.auth.get_user_by_req(request) - return 200, {"presence": "offline"} - - async def on_PUT(self, request, user_id): - await self.auth.get_user_by_req(request) - return 200, {} - - class KeyUploadServlet(RestServlet): """An implementation of the `KeyUploadServlet` that responds to read only requests, but otherwise proxies through to the master instance. 
@@ -241,6 +221,7 @@ class GenericWorkerSlavedStore( StatsStore, UIAuthWorkerStore, EndToEndRoomKeyStore, + PresenceStore, SlavedDeviceInboxStore, SlavedDeviceStore, SlavedReceiptsStore, @@ -259,7 +240,6 @@ class GenericWorkerSlavedStore( SlavedTransactionStore, SlavedProfileStore, SlavedClientIpStore, - SlavedPresenceStore, SlavedFilteringStore, MonthlyActiveUsersWorkerStore, MediaRepositoryStore, @@ -327,10 +307,7 @@ class GenericWorkerServer(HomeServer): user_directory.register_servlets(self, resource) - # If presence is disabled, use the stub servlet that does - # not allow sending presence - if not self.config.use_presence: - PresenceStatusStubServlet(self).register(resource) + presence.register_servlets(self, resource) groups.register_servlets(self, resource) diff --git a/synapse/config/workers.py b/synapse/config/workers.py index b2540163d1..462630201d 100644 --- a/synapse/config/workers.py +++ b/synapse/config/workers.py @@ -64,6 +64,14 @@ class WriterLocations: Attributes: events: The instances that write to the event and backfill streams. typing: The instance that writes to the typing stream. + to_device: The instances that write to the to_device stream. Currently + can only be a single instance. + account_data: The instances that write to the account data streams. Currently + can only be a single instance. + receipts: The instances that write to the receipts stream. Currently + can only be a single instance. + presence: The instances that write to the presence stream. Currently + can only be a single instance. """ events = attr.ib( @@ -85,6 +93,11 @@ class WriterLocations: type=List[str], converter=_instance_to_list_converter, ) + presence = attr.ib( + default=["master"], + type=List[str], + converter=_instance_to_list_converter, + ) class WorkerConfig(Config): @@ -188,7 +201,14 @@ class WorkerConfig(Config): # Check that the configured writers for events and typing also appears in # `instance_map`. - for stream in ("events", "typing", "to_device", "account_data", "receipts"): + for stream in ( + "events", + "typing", + "to_device", + "account_data", + "receipts", + "presence", + ): instances = _instance_to_list_converter(getattr(self.writers, stream)) for instance in instances: if instance != "master" and instance not in self.instance_map: @@ -215,6 +235,11 @@ class WorkerConfig(Config): if len(self.writers.events) == 0: raise ConfigError("Must specify at least one instance to handle `events`.") + if len(self.writers.presence) != 1: + raise ConfigError( + "Must only specify one instance to handle `presence` messages." 
+ ) + self.events_shard_config = RoutableShardedWorkerHandlingConfig( self.writers.events ) diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index 7fd28ffa54..9938be3821 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -122,7 +122,8 @@ assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER class BasePresenceHandler(abc.ABC): - """Parts of the PresenceHandler that are shared between workers and master""" + """Parts of the PresenceHandler that are shared between workers and presence + writer""" def __init__(self, hs: "HomeServer"): self.clock = hs.get_clock() @@ -309,8 +310,16 @@ class WorkerPresenceHandler(BasePresenceHandler): super().__init__(hs) self.hs = hs + self._presence_writer_instance = hs.config.worker.writers.presence[0] + self._presence_enabled = hs.config.use_presence + # Route presence EDUs to the right worker + hs.get_federation_registry().register_instances_for_edu( + "m.presence", + hs.config.worker.writers.presence, + ) + # The number of ongoing syncs on this process, by user id. # Empty if _presence_enabled is false. self._user_to_num_current_syncs = {} # type: Dict[str, int] @@ -318,8 +327,8 @@ class WorkerPresenceHandler(BasePresenceHandler): self.notifier = hs.get_notifier() self.instance_id = hs.get_instance_id() - # user_id -> last_sync_ms. Lists the users that have stopped syncing - # but we haven't notified the master of that yet + # user_id -> last_sync_ms. Lists the users that have stopped syncing but + # we haven't notified the presence writer of that yet self.users_going_offline = {} self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs) @@ -352,22 +361,23 @@ class WorkerPresenceHandler(BasePresenceHandler): ) def mark_as_coming_online(self, user_id): - """A user has started syncing. Send a UserSync to the master, unless they - had recently stopped syncing. + """A user has started syncing. Send a UserSync to the presence writer, + unless they had recently stopped syncing. Args: user_id (str) """ going_offline = self.users_going_offline.pop(user_id, None) if not going_offline: - # Safe to skip because we haven't yet told the master they were offline + # Safe to skip because we haven't yet told the presence writer they + # were offline self.send_user_sync(user_id, True, self.clock.time_msec()) def mark_as_going_offline(self, user_id): - """A user has stopped syncing. We wait before notifying the master as - its likely they'll come back soon. This allows us to avoid sending - a stopped syncing immediately followed by a started syncing notification - to the master + """A user has stopped syncing. We wait before notifying the presence + writer as its likely they'll come back soon. This allows us to avoid + sending a stopped syncing immediately followed by a started syncing + notification to the presence writer Args: user_id (str) @@ -375,8 +385,8 @@ class WorkerPresenceHandler(BasePresenceHandler): self.users_going_offline[user_id] = self.clock.time_msec() def send_stop_syncing(self): - """Check if there are any users who have stopped syncing a while ago - and haven't come back yet. If there are poke the master about them. + """Check if there are any users who have stopped syncing a while ago and + haven't come back yet. If there are poke the presence writer about them. 
""" now = self.clock.time_msec() for user_id, last_sync_ms in list(self.users_going_offline.items()): @@ -492,9 +502,12 @@ class WorkerPresenceHandler(BasePresenceHandler): if not self.hs.config.use_presence: return - # Proxy request to master + # Proxy request to instance that writes presence await self._set_state_client( - user_id=user_id, state=state, ignore_status_msg=ignore_status_msg + instance_name=self._presence_writer_instance, + user_id=user_id, + state=state, + ignore_status_msg=ignore_status_msg, ) async def bump_presence_active_time(self, user): @@ -505,9 +518,11 @@ class WorkerPresenceHandler(BasePresenceHandler): if not self.hs.config.use_presence: return - # Proxy request to master + # Proxy request to instance that writes presence user_id = user.to_string() - await self._bump_active_client(user_id=user_id) + await self._bump_active_client( + instance_name=self._presence_writer_instance, user_id=user_id + ) class PresenceHandler(BasePresenceHandler): @@ -1909,7 +1924,7 @@ class PresenceFederationQueue: self._queue_presence_updates = True # Whether this instance is a presence writer. - self._presence_writer = hs.config.worker.worker_app is None + self._presence_writer = self._instance_name in hs.config.worker.writers.presence # The FederationSender instance, if this process sends federation traffic directly. self._federation = None @@ -1957,7 +1972,7 @@ class PresenceFederationQueue: Will forward to the local federation sender (if there is one) and queue to send over replication (if there are other federation sender instances.). - Must only be called on the master process. + Must only be called on the presence writer process. """ # This should only be called on a presence writer. @@ -2003,10 +2018,11 @@ class PresenceFederationQueue: We return rows in the form of `(destination, user_id)` to keep the size of each row bounded (rather than returning the sets in a row). - On workers this will query the master process via HTTP replication. + On workers this will query the presence writer process via HTTP replication. """ if instance_name != self._instance_name: - # If not local we query over http replication from the master + # If not local we query over http replication from the presence + # writer result = await self._repl_client( instance_name=instance_name, stream_name=PresenceFederationStream.NAME, diff --git a/synapse/replication/http/_base.py b/synapse/replication/http/_base.py index ece03467b5..5685cf2121 100644 --- a/synapse/replication/http/_base.py +++ b/synapse/replication/http/_base.py @@ -158,7 +158,10 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta): def make_client(cls, hs): """Create a client that makes requests. - Returns a callable that accepts the same parameters as `_serialize_payload`. + Returns a callable that accepts the same parameters as + `_serialize_payload`, and also accepts an optional `instance_name` + parameter to specify which instance to hit (the instance must be in + the `instance_map` config). """ clock = hs.get_clock() client = hs.get_simple_http_client() diff --git a/synapse/replication/slave/storage/presence.py b/synapse/replication/slave/storage/presence.py deleted file mode 100644 index 57327d910d..0000000000 --- a/synapse/replication/slave/storage/presence.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright 2016 OpenMarket Ltd -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from synapse.replication.tcp.streams import PresenceStream -from synapse.storage import DataStore -from synapse.storage.database import DatabasePool -from synapse.storage.databases.main.presence import PresenceStore -from synapse.util.caches.stream_change_cache import StreamChangeCache - -from ._base import BaseSlavedStore -from ._slaved_id_tracker import SlavedIdTracker - - -class SlavedPresenceStore(BaseSlavedStore): - def __init__(self, database: DatabasePool, db_conn, hs): - super().__init__(database, db_conn, hs) - self._presence_id_gen = SlavedIdTracker(db_conn, "presence_stream", "stream_id") - - self._presence_on_startup = self._get_active_presence(db_conn) # type: ignore - - self.presence_stream_cache = StreamChangeCache( - "PresenceStreamChangeCache", self._presence_id_gen.get_current_token() - ) - - _get_active_presence = DataStore._get_active_presence - take_presence_startup_info = DataStore.take_presence_startup_info - _get_presence_for_user = PresenceStore.__dict__["_get_presence_for_user"] - get_presence_for_users = PresenceStore.__dict__["get_presence_for_users"] - - def get_current_presence_token(self): - return self._presence_id_gen.get_current_token() - - def process_replication_rows(self, stream_name, instance_name, token, rows): - if stream_name == PresenceStream.NAME: - self._presence_id_gen.advance(instance_name, token) - for row in rows: - self.presence_stream_cache.entity_has_changed(row.user_id, token) - self._get_presence_for_user.invalidate((row.user_id,)) - return super().process_replication_rows(stream_name, instance_name, token, rows) diff --git a/synapse/replication/tcp/handler.py b/synapse/replication/tcp/handler.py index 2ce1b9f222..7ced4c543c 100644 --- a/synapse/replication/tcp/handler.py +++ b/synapse/replication/tcp/handler.py @@ -55,6 +55,8 @@ from synapse.replication.tcp.streams import ( CachesStream, EventsStream, FederationStream, + PresenceFederationStream, + PresenceStream, ReceiptsStream, Stream, TagAccountDataStream, @@ -99,6 +101,10 @@ class ReplicationCommandHandler: self._instance_id = hs.get_instance_id() self._instance_name = hs.get_instance_name() + self._is_presence_writer = ( + hs.get_instance_name() in hs.config.worker.writers.presence + ) + self._streams = { stream.NAME: stream(hs) for stream in STREAMS_MAP.values() } # type: Dict[str, Stream] @@ -153,6 +159,14 @@ class ReplicationCommandHandler: continue + if isinstance(stream, (PresenceStream, PresenceFederationStream)): + # Only add PresenceStream as a source on the instance in charge + # of presence. + if self._is_presence_writer: + self._streams_to_replicate.append(stream) + + continue + # Only add any other streams if we're on master. 
if hs.config.worker_app is not None: continue @@ -350,7 +364,7 @@ class ReplicationCommandHandler: ) -> Optional[Awaitable[None]]: user_sync_counter.inc() - if self._is_master: + if self._is_presence_writer: return self._presence_handler.update_external_syncs_row( cmd.instance_id, cmd.user_id, cmd.is_syncing, cmd.last_sync_ms ) @@ -360,7 +374,7 @@ class ReplicationCommandHandler: def on_CLEAR_USER_SYNC( self, conn: IReplicationConnection, cmd: ClearUserSyncsCommand ) -> Optional[Awaitable[None]]: - if self._is_master: + if self._is_presence_writer: return self._presence_handler.update_external_syncs_clear(cmd.instance_id) else: return None diff --git a/synapse/replication/tcp/streams/_base.py b/synapse/replication/tcp/streams/_base.py index 9d75a89f1c..b03824925a 100644 --- a/synapse/replication/tcp/streams/_base.py +++ b/synapse/replication/tcp/streams/_base.py @@ -272,15 +272,22 @@ class PresenceStream(Stream): NAME = "presence" ROW_TYPE = PresenceStreamRow - def __init__(self, hs): + def __init__(self, hs: "HomeServer"): store = hs.get_datastore() - if hs.config.worker_app is None: - # on the master, query the presence handler + if hs.get_instance_name() in hs.config.worker.writers.presence: + # on the presence writer, query the presence handler presence_handler = hs.get_presence_handler() - update_function = presence_handler.get_all_presence_updates + + from synapse.handlers.presence import PresenceHandler + + assert isinstance(presence_handler, PresenceHandler) + + update_function = ( + presence_handler.get_all_presence_updates + ) # type: UpdateFunction else: - # Query master process + # Query presence writer process update_function = make_http_update_function(hs, self.NAME) super().__init__( diff --git a/synapse/rest/client/v1/presence.py b/synapse/rest/client/v1/presence.py index c232484f29..2b24fe5aa6 100644 --- a/synapse/rest/client/v1/presence.py +++ b/synapse/rest/client/v1/presence.py @@ -35,10 +35,15 @@ class PresenceStatusRestServlet(RestServlet): self.clock = hs.get_clock() self.auth = hs.get_auth() + self._use_presence = hs.config.server.use_presence + async def on_GET(self, request, user_id): requester = await self.auth.get_user_by_req(request) user = UserID.from_string(user_id) + if not self._use_presence: + return 200, {"presence": "offline"} + if requester.user != user: allowed = await self.presence_handler.is_visible( observed_user=user, observer_user=requester.user @@ -80,7 +85,7 @@ class PresenceStatusRestServlet(RestServlet): except Exception: raise SynapseError(400, "Unable to parse state") - if self.hs.config.use_presence: + if self._use_presence: await self.presence_handler.set_state(user, state) return 200, {} diff --git a/synapse/server.py b/synapse/server.py index 67598fffe3..8c147be2b3 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -418,10 +418,10 @@ class HomeServer(metaclass=abc.ABCMeta): @cache_in_self def get_presence_handler(self) -> BasePresenceHandler: - if self.config.worker_app: - return WorkerPresenceHandler(self) - else: + if self.get_instance_name() in self.config.worker.writers.presence: return PresenceHandler(self) + else: + return WorkerPresenceHandler(self) @cache_in_self def get_typing_writer_handler(self) -> TypingWriterHandler: diff --git a/synapse/storage/databases/main/__init__.py b/synapse/storage/databases/main/__init__.py index 5c50f5f950..49c7606d51 100644 --- a/synapse/storage/databases/main/__init__.py +++ b/synapse/storage/databases/main/__init__.py @@ -17,7 +17,6 @@ import logging from typing import List, Optional, 
Tuple -from synapse.api.constants import PresenceState from synapse.config.homeserver import HomeServerConfig from synapse.storage.database import DatabasePool from synapse.storage.databases.main.stats import UserSortOrder @@ -51,7 +50,7 @@ from .media_repository import MediaRepositoryStore from .metrics import ServerMetricsStore from .monthly_active_users import MonthlyActiveUsersStore from .openid import OpenIdStore -from .presence import PresenceStore, UserPresenceState +from .presence import PresenceStore from .profile import ProfileStore from .purge_events import PurgeEventsStore from .push_rule import PushRuleStore @@ -126,9 +125,6 @@ class DataStore( self._clock = hs.get_clock() self.database_engine = database.engine - self._presence_id_gen = StreamIdGenerator( - db_conn, "presence_stream", "stream_id" - ) self._public_room_id_gen = StreamIdGenerator( db_conn, "public_room_list_stream", "stream_id" ) @@ -177,21 +173,6 @@ class DataStore( super().__init__(database, db_conn, hs) - self._presence_on_startup = self._get_active_presence(db_conn) - - presence_cache_prefill, min_presence_val = self.db_pool.get_cache_dict( - db_conn, - "presence_stream", - entity_column="user_id", - stream_column="stream_id", - max_value=self._presence_id_gen.get_current_token(), - ) - self.presence_stream_cache = StreamChangeCache( - "PresenceStreamChangeCache", - min_presence_val, - prefilled_cache=presence_cache_prefill, - ) - device_list_max = self._device_list_id_gen.get_current_token() self._device_list_stream_cache = StreamChangeCache( "DeviceListStreamChangeCache", device_list_max @@ -238,32 +219,6 @@ class DataStore( def get_device_stream_token(self) -> int: return self._device_list_id_gen.get_current_token() - def take_presence_startup_info(self): - active_on_startup = self._presence_on_startup - self._presence_on_startup = None - return active_on_startup - - def _get_active_presence(self, db_conn): - """Fetch non-offline presence from the database so that we can register - the appropriate time outs. - """ - - sql = ( - "SELECT user_id, state, last_active_ts, last_federation_update_ts," - " last_user_sync_ts, status_msg, currently_active FROM presence_stream" - " WHERE state != ?" - ) - - txn = db_conn.cursor() - txn.execute(sql, (PresenceState.OFFLINE,)) - rows = self.db_pool.cursor_to_dict(txn) - txn.close() - - for row in rows: - row["currently_active"] = bool(row["currently_active"]) - - return [UserPresenceState(**row) for row in rows] - async def get_users(self) -> List[JsonDict]: """Function to retrieve a list of users in users table. diff --git a/synapse/storage/databases/main/presence.py b/synapse/storage/databases/main/presence.py index c207d917b1..db22fab23e 100644 --- a/synapse/storage/databases/main/presence.py +++ b/synapse/storage/databases/main/presence.py @@ -12,16 +12,69 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-from typing import Dict, List, Tuple +from typing import TYPE_CHECKING, Dict, List, Tuple -from synapse.api.presence import UserPresenceState +from synapse.api.presence import PresenceState, UserPresenceState +from synapse.replication.tcp.streams import PresenceStream from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause +from synapse.storage.database import DatabasePool +from synapse.storage.engines import PostgresEngine +from synapse.storage.types import Connection +from synapse.storage.util.id_generators import MultiWriterIdGenerator, StreamIdGenerator from synapse.util.caches.descriptors import cached, cachedList +from synapse.util.caches.stream_change_cache import StreamChangeCache from synapse.util.iterutils import batch_iter +if TYPE_CHECKING: + from synapse.server import HomeServer + class PresenceStore(SQLBaseStore): + def __init__( + self, + database: DatabasePool, + db_conn: Connection, + hs: "HomeServer", + ): + super().__init__(database, db_conn, hs) + + self._can_persist_presence = ( + hs.get_instance_name() in hs.config.worker.writers.presence + ) + + if isinstance(database.engine, PostgresEngine): + self._presence_id_gen = MultiWriterIdGenerator( + db_conn=db_conn, + db=database, + stream_name="presence_stream", + instance_name=self._instance_name, + tables=[("presence_stream", "instance_name", "stream_id")], + sequence_name="presence_stream_sequence", + writers=hs.config.worker.writers.presence, + ) + else: + self._presence_id_gen = StreamIdGenerator( + db_conn, "presence_stream", "stream_id" + ) + + self._presence_on_startup = self._get_active_presence(db_conn) + + presence_cache_prefill, min_presence_val = self.db_pool.get_cache_dict( + db_conn, + "presence_stream", + entity_column="user_id", + stream_column="stream_id", + max_value=self._presence_id_gen.get_current_token(), + ) + self.presence_stream_cache = StreamChangeCache( + "PresenceStreamChangeCache", + min_presence_val, + prefilled_cache=presence_cache_prefill, + ) + async def update_presence(self, presence_states): + assert self._can_persist_presence + stream_ordering_manager = self._presence_id_gen.get_next_mult( len(presence_states) ) @@ -57,6 +110,7 @@ class PresenceStore(SQLBaseStore): "last_user_sync_ts": state.last_user_sync_ts, "status_msg": state.status_msg, "currently_active": state.currently_active, + "instance_name": self._instance_name, } for stream_id, state in zip(stream_orderings, presence_states) ], @@ -216,3 +270,37 @@ class PresenceStore(SQLBaseStore): def get_current_presence_token(self): return self._presence_id_gen.get_current_token() + + def _get_active_presence(self, db_conn: Connection): + """Fetch non-offline presence from the database so that we can register + the appropriate time outs. + """ + + sql = ( + "SELECT user_id, state, last_active_ts, last_federation_update_ts," + " last_user_sync_ts, status_msg, currently_active FROM presence_stream" + " WHERE state != ?"
+ ) + + txn = db_conn.cursor() + txn.execute(sql, (PresenceState.OFFLINE,)) + rows = self.db_pool.cursor_to_dict(txn) + txn.close() + + for row in rows: + row["currently_active"] = bool(row["currently_active"]) + + return [UserPresenceState(**row) for row in rows] + + def take_presence_startup_info(self): + active_on_startup = self._presence_on_startup + self._presence_on_startup = None + return active_on_startup + + def process_replication_rows(self, stream_name, instance_name, token, rows): + if stream_name == PresenceStream.NAME: + self._presence_id_gen.advance(instance_name, token) + for row in rows: + self.presence_stream_cache.entity_has_changed(row.user_id, token) + self._get_presence_for_user.invalidate((row.user_id,)) + return super().process_replication_rows(stream_name, instance_name, token, rows) diff --git a/synapse/storage/databases/main/schema/delta/59/12presence_stream_instance.sql b/synapse/storage/databases/main/schema/delta/59/12presence_stream_instance.sql new file mode 100644 index 0000000000..b6ba0bda1a --- /dev/null +++ b/synapse/storage/databases/main/schema/delta/59/12presence_stream_instance.sql @@ -0,0 +1,18 @@ +/* Copyright 2021 The Matrix.org Foundation C.I.C + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +-- Add a column to specify which instance wrote the row. Historic rows have +-- `NULL`, which indicates that the master instance wrote them. +ALTER TABLE presence_stream ADD COLUMN instance_name TEXT; diff --git a/synapse/storage/databases/main/schema/delta/59/12presence_stream_instance_seq.sql.postgres b/synapse/storage/databases/main/schema/delta/59/12presence_stream_instance_seq.sql.postgres new file mode 100644 index 0000000000..02b182adf9 --- /dev/null +++ b/synapse/storage/databases/main/schema/delta/59/12presence_stream_instance_seq.sql.postgres @@ -0,0 +1,20 @@ +/* Copyright 2021 The Matrix.org Foundation C.I.C + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +CREATE SEQUENCE IF NOT EXISTS presence_stream_sequence; + +SELECT setval('presence_stream_sequence', ( + SELECT COALESCE(MAX(stream_id), 1) FROM presence_stream +)); diff --git a/tests/app/test_frontend_proxy.py b/tests/app/test_frontend_proxy.py deleted file mode 100644 index 3d45da38ab..0000000000 --- a/tests/app/test_frontend_proxy.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright 2018 New Vector Ltd -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from synapse.app.generic_worker import GenericWorkerServer - -from tests.server import make_request -from tests.unittest import HomeserverTestCase - - -class FrontendProxyTests(HomeserverTestCase): - def make_homeserver(self, reactor, clock): - - hs = self.setup_test_homeserver( - federation_http_client=None, homeserver_to_use=GenericWorkerServer - ) - - return hs - - def default_config(self): - c = super().default_config() - c["worker_app"] = "synapse.app.frontend_proxy" - - c["worker_listeners"] = [ - { - "type": "http", - "port": 8080, - "bind_addresses": ["0.0.0.0"], - "resources": [{"names": ["client"]}], - } - ] - - return c - - def test_listen_http_with_presence_enabled(self): - """ - When presence is on, the stub servlet will not register. - """ - # Presence is on - self.hs.config.use_presence = True - - # Listen with the config - self.hs._listen_http(self.hs.config.worker.worker_listeners[0]) - - # Grab the resource from the site that was told to listen - self.assertEqual(len(self.reactor.tcpServers), 1) - site = self.reactor.tcpServers[0][1] - - channel = make_request(self.reactor, site, "PUT", "presence/a/status") - - # 400 + unrecognised, because nothing is registered - self.assertEqual(channel.code, 400) - self.assertEqual(channel.json_body["errcode"], "M_UNRECOGNIZED") - - def test_listen_http_with_presence_disabled(self): - """ - When presence is off, the stub servlet will register. - """ - # Presence is off - self.hs.config.use_presence = False - - # Listen with the config - self.hs._listen_http(self.hs.config.worker.worker_listeners[0]) - - # Grab the resource from the site that was told to listen - self.assertEqual(len(self.reactor.tcpServers), 1) - site = self.reactor.tcpServers[0][1] - - channel = make_request(self.reactor, site, "PUT", "presence/a/status") - - # 401, because the stub servlet still checks authentication - self.assertEqual(channel.code, 401) - self.assertEqual(channel.json_body["errcode"], "M_MISSING_TOKEN") diff --git a/tests/rest/client/v1/test_presence.py b/tests/rest/client/v1/test_presence.py index 3a050659ca..409f3949dc 100644 --- a/tests/rest/client/v1/test_presence.py +++ b/tests/rest/client/v1/test_presence.py @@ -16,6 +16,7 @@ from unittest.mock import Mock from twisted.internet import defer +from synapse.handlers.presence import PresenceHandler from synapse.rest.client.v1 import presence from synapse.types import UserID @@ -32,7 +33,7 @@ class PresenceTestCase(unittest.HomeserverTestCase): def make_homeserver(self, reactor, clock): - presence_handler = Mock() + presence_handler = Mock(spec=PresenceHandler) presence_handler.set_state.return_value = defer.succeed(None) hs = self.setup_test_homeserver( @@ -59,12 +60,12 @@ class PresenceTestCase(unittest.HomeserverTestCase): self.assertEqual(channel.code, 200) self.assertEqual(self.hs.get_presence_handler().set_state.call_count, 1) + @unittest.override_config({"use_presence": False}) def test_put_presence_disabled(self): """ PUT to the status endpoint with use_presence disabled will NOT call set_state on the presence handler. 
""" - self.hs.config.use_presence = False body = {"presence": "here", "status_msg": "beep boop"} channel = self.make_request( -- cgit 1.5.1 From ceaa76970fa0092bbdc35055c6f32dc63dd59960 Mon Sep 17 00:00:00 2001 From: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com> Date: Fri, 23 Apr 2021 13:37:48 +0100 Subject: Remove room and user invite ratelimits in default unit test config (#9871) --- changelog.d/9871.misc | 1 + tests/utils.py | 4 ++++ 2 files changed, 5 insertions(+) create mode 100644 changelog.d/9871.misc diff --git a/changelog.d/9871.misc b/changelog.d/9871.misc new file mode 100644 index 0000000000..b19acfab62 --- /dev/null +++ b/changelog.d/9871.misc @@ -0,0 +1 @@ +Disable invite rate-limiting by default when running the unit tests. \ No newline at end of file diff --git a/tests/utils.py b/tests/utils.py index 63d52b9140..6bd008dcfe 100644 --- a/tests/utils.py +++ b/tests/utils.py @@ -153,6 +153,10 @@ def default_config(name, parse=False): "local": {"per_second": 10000, "burst_count": 10000}, "remote": {"per_second": 10000, "burst_count": 10000}, }, + "rc_invites": { + "per_room": {"per_second": 10000, "burst_count": 10000}, + "per_user": {"per_second": 10000, "burst_count": 10000}, + }, "rc_3pid_validation": {"per_second": 10000, "burst_count": 10000}, "saml2_enabled": False, "public_baseurl": None, -- cgit 1.5.1 From a15c003e5b0bff8bf78a675f3b719d3f25fe8bde Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Fri, 23 Apr 2021 15:46:29 +0100 Subject: Make DomainSpecificString an attrs class (#9875) --- changelog.d/9875.misc | 1 + synapse/handlers/oidc.py | 5 +++++ synapse/rest/synapse/client/new_user_consent.py | 9 +++++++++ synapse/types.py | 17 +++++++++-------- 4 files changed, 24 insertions(+), 8 deletions(-) create mode 100644 changelog.d/9875.misc diff --git a/changelog.d/9875.misc b/changelog.d/9875.misc new file mode 100644 index 0000000000..9345c0bf45 --- /dev/null +++ b/changelog.d/9875.misc @@ -0,0 +1 @@ +Make `DomainSpecificString` an `attrs` class. diff --git a/synapse/handlers/oidc.py b/synapse/handlers/oidc.py index 45514be50f..1c4a43be0a 100644 --- a/synapse/handlers/oidc.py +++ b/synapse/handlers/oidc.py @@ -957,6 +957,11 @@ class OidcProvider: # and attempt to match it. attributes = await oidc_response_to_user_attributes(failures=0) + if attributes.localpart is None: + # If no localpart is returned then we will generate one, so + # there is no need to search for existing users. + return None + user_id = UserID(attributes.localpart, self._server_name).to_string() users = await self._store.get_users_by_id_case_insensitive(user_id) if users: diff --git a/synapse/rest/synapse/client/new_user_consent.py b/synapse/rest/synapse/client/new_user_consent.py index e5634f9679..488b97b32e 100644 --- a/synapse/rest/synapse/client/new_user_consent.py +++ b/synapse/rest/synapse/client/new_user_consent.py @@ -61,6 +61,15 @@ class NewUserConsentResource(DirectServeHtmlResource): self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code) return + # It should be impossible to get here without having first been through + # the pick-a-username step, which ensures chosen_localpart gets set. 
+ if not session.chosen_localpart: + logger.warning("Session has no user name selected") + self._sso_handler.render_error( + request, "no_user", "No user name has been selected.", code=400 + ) + return + user_id = UserID(session.chosen_localpart, self._server_name) user_profile = { "display_name": session.display_name, diff --git a/synapse/types.py b/synapse/types.py index e19f28d543..e52cd7ffd4 100644 --- a/synapse/types.py +++ b/synapse/types.py @@ -199,9 +199,8 @@ def get_localpart_from_id(string): DS = TypeVar("DS", bound="DomainSpecificString") -class DomainSpecificString( - namedtuple("DomainSpecificString", ("localpart", "domain")), metaclass=abc.ABCMeta -): +@attr.s(slots=True, frozen=True, repr=False) +class DomainSpecificString(metaclass=abc.ABCMeta): """Common base class among ID/name strings that have a local part and a domain name, prefixed with a sigil. @@ -213,11 +212,8 @@ class DomainSpecificString( SIGIL = abc.abstractproperty() # type: str # type: ignore - # Deny iteration because it will bite you if you try to create a singleton - # set by: - # users = set(user) - def __iter__(self): - raise ValueError("Attempted to iterate a %s" % (type(self).__name__,)) + localpart = attr.ib(type=str) + domain = attr.ib(type=str) # Because this class is a namedtuple of strings and booleans, it is deeply # immutable. @@ -272,30 +268,35 @@ class DomainSpecificString( __repr__ = to_string +@attr.s(slots=True, frozen=True, repr=False) class UserID(DomainSpecificString): """Structure representing a user ID.""" SIGIL = "@" +@attr.s(slots=True, frozen=True, repr=False) class RoomAlias(DomainSpecificString): """Structure representing a room name.""" SIGIL = "#" +@attr.s(slots=True, frozen=True, repr=False) class RoomID(DomainSpecificString): """Structure representing a room id. """ SIGIL = "!" +@attr.s(slots=True, frozen=True, repr=False) class EventID(DomainSpecificString): """Structure representing an event id. """ SIGIL = "$" +@attr.s(slots=True, frozen=True, repr=False) class GroupID(DomainSpecificString): """Structure representing a group ID.""" -- cgit 1.5.1 From e83627926fb5373b383129b99a5039e8a2e329af Mon Sep 17 00:00:00 2001 From: Patrick Cloke Date: Fri, 23 Apr 2021 12:02:16 -0400 Subject: Add type hints to auth and auth_blocking. (#9876) --- changelog.d/9876.misc | 1 + synapse/api/auth.py | 78 ++++++++++++++++++++++---------------------- synapse/api/auth_blocking.py | 9 +++-- synapse/event_auth.py | 4 +-- 4 files changed, 48 insertions(+), 44 deletions(-) create mode 100644 changelog.d/9876.misc diff --git a/changelog.d/9876.misc b/changelog.d/9876.misc new file mode 100644 index 0000000000..28390e32e6 --- /dev/null +++ b/changelog.d/9876.misc @@ -0,0 +1 @@ +Add type hints to `synapse.api.auth` and `synapse.api.auth_blocking` modules. diff --git a/synapse/api/auth.py b/synapse/api/auth.py index 2d845d0d5c..efc926d094 100644 --- a/synapse/api/auth.py +++ b/synapse/api/auth.py @@ -12,14 +12,13 @@ # See the License for the specific language governing permissions and # limitations under the License. 
import logging -from typing import List, Optional, Tuple +from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple import pymacaroons from netaddr import IPAddress from twisted.web.server import Request -import synapse.types from synapse import event_auth from synapse.api.auth_blocking import AuthBlocking from synapse.api.constants import EventTypes, HistoryVisibility, Membership @@ -36,11 +35,14 @@ from synapse.http import get_request_user_agent from synapse.http.site import SynapseRequest from synapse.logging import opentracing as opentracing from synapse.storage.databases.main.registration import TokenLookupResult -from synapse.types import StateMap, UserID +from synapse.types import Requester, StateMap, UserID, create_requester from synapse.util.caches.lrucache import LruCache from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry from synapse.util.metrics import Measure +if TYPE_CHECKING: + from synapse.server import HomeServer + logger = logging.getLogger(__name__) @@ -68,7 +70,7 @@ class Auth: The latter should be moved to synapse.handlers.event_auth.EventAuthHandler. """ - def __init__(self, hs): + def __init__(self, hs: "HomeServer"): self.hs = hs self.clock = hs.get_clock() self.store = hs.get_datastore() @@ -88,13 +90,13 @@ class Auth: async def check_from_context( self, room_version: str, event, context, do_sig_check=True - ): + ) -> None: prev_state_ids = await context.get_prev_state_ids() auth_events_ids = self.compute_auth_events( event, prev_state_ids, for_verification=True ) - auth_events = await self.store.get_events(auth_events_ids) - auth_events = {(e.type, e.state_key): e for e in auth_events.values()} + auth_events_by_id = await self.store.get_events(auth_events_ids) + auth_events = {(e.type, e.state_key): e for e in auth_events_by_id.values()} room_version_obj = KNOWN_ROOM_VERSIONS[room_version] event_auth.check( @@ -151,17 +153,11 @@ class Auth: raise AuthError(403, "User %s not in room %s" % (user_id, room_id)) - async def check_host_in_room(self, room_id, host): + async def check_host_in_room(self, room_id: str, host: str) -> bool: with Measure(self.clock, "check_host_in_room"): - latest_event_ids = await self.store.is_host_joined(room_id, host) - return latest_event_ids - - def can_federate(self, event, auth_events): - creation_event = auth_events.get((EventTypes.Create, "")) + return await self.store.is_host_joined(room_id, host) - return creation_event.content.get("m.federate", True) is True - - def get_public_keys(self, invite_event): + def get_public_keys(self, invite_event: EventBase) -> List[Dict[str, Any]]: return event_auth.get_public_keys(invite_event) async def get_user_by_req( @@ -170,7 +166,7 @@ class Auth: allow_guest: bool = False, rights: str = "access", allow_expired: bool = False, - ) -> synapse.types.Requester: + ) -> Requester: """Get a registered user's ID. 
Args: @@ -196,7 +192,7 @@ class Auth: access_token = self.get_access_token_from_request(request) user_id, app_service = await self._get_appservice_user_id(request) - if user_id: + if user_id and app_service: if ip_addr and self._track_appservice_user_ips: await self.store.insert_client_ip( user_id=user_id, @@ -206,9 +202,7 @@ class Auth: device_id="dummy-device", # stubbed ) - requester = synapse.types.create_requester( - user_id, app_service=app_service - ) + requester = create_requester(user_id, app_service=app_service) request.requester = user_id opentracing.set_tag("authenticated_entity", user_id) @@ -251,7 +245,7 @@ class Auth: errcode=Codes.GUEST_ACCESS_FORBIDDEN, ) - requester = synapse.types.create_requester( + requester = create_requester( user_info.user_id, token_id, is_guest, @@ -271,7 +265,9 @@ class Auth: except KeyError: raise MissingClientTokenError() - async def _get_appservice_user_id(self, request): + async def _get_appservice_user_id( + self, request: Request + ) -> Tuple[Optional[str], Optional[ApplicationService]]: app_service = self.store.get_app_service_by_token( self.get_access_token_from_request(request) ) @@ -283,6 +279,9 @@ class Auth: if ip_address not in app_service.ip_range_whitelist: return None, None + # This will always be set by the time Twisted calls us. + assert request.args is not None + if b"user_id" not in request.args: return app_service.sender, app_service @@ -387,7 +386,9 @@ class Auth: logger.warning("Invalid macaroon in auth: %s %s", type(e), e) raise InvalidClientTokenError("Invalid macaroon passed.") - def _parse_and_validate_macaroon(self, token, rights="access"): + def _parse_and_validate_macaroon( + self, token: str, rights: str = "access" + ) -> Tuple[str, bool]: """Takes a macaroon and tries to parse and validate it. This is cached if and only if rights == access and there isn't an expiry. @@ -432,15 +433,16 @@ class Auth: return user_id, guest - def validate_macaroon(self, macaroon, type_string, user_id): + def validate_macaroon( + self, macaroon: pymacaroons.Macaroon, type_string: str, user_id: str + ) -> None: """ validate that a Macaroon is understood by and was signed by this server. Args: - macaroon(pymacaroons.Macaroon): The macaroon to validate - type_string(str): The kind of token required (e.g. "access", - "delete_pusher") - user_id (str): The user_id required + macaroon: The macaroon to validate + type_string: The kind of token required (e.g. "access", "delete_pusher") + user_id: The user_id required """ v = pymacaroons.Verifier() @@ -465,9 +467,7 @@ class Auth: if not service: logger.warning("Unrecognised appservice access token.") raise InvalidClientTokenError() - request.requester = synapse.types.create_requester( - service.sender, app_service=service - ) + request.requester = create_requester(service.sender, app_service=service) return service async def is_server_admin(self, user: UserID) -> bool: @@ -519,7 +519,7 @@ class Auth: return auth_ids - async def check_can_change_room_list(self, room_id: str, user: UserID): + async def check_can_change_room_list(self, room_id: str, user: UserID) -> bool: """Determine whether the user is allowed to edit the room's entry in the published room list. @@ -554,11 +554,11 @@ class Auth: return user_level >= send_level @staticmethod - def has_access_token(request: Request): + def has_access_token(request: Request) -> bool: """Checks if the request has an access_token. Returns: - bool: False if no access_token was given, True otherwise. + False if no access_token was given, True otherwise. 
""" # This will always be set by the time Twisted calls us. assert request.args is not None @@ -568,13 +568,13 @@ class Auth: return bool(query_params) or bool(auth_headers) @staticmethod - def get_access_token_from_request(request: Request): + def get_access_token_from_request(request: Request) -> str: """Extracts the access_token from the request. Args: request: The http request. Returns: - unicode: The access_token + The access_token Raises: MissingClientTokenError: If there isn't a single access_token in the request @@ -649,5 +649,5 @@ class Auth: % (user_id, room_id), ) - def check_auth_blocking(self, *args, **kwargs): - return self._auth_blocking.check_auth_blocking(*args, **kwargs) + async def check_auth_blocking(self, *args, **kwargs) -> None: + await self._auth_blocking.check_auth_blocking(*args, **kwargs) diff --git a/synapse/api/auth_blocking.py b/synapse/api/auth_blocking.py index a8df60cb89..e6bced93d5 100644 --- a/synapse/api/auth_blocking.py +++ b/synapse/api/auth_blocking.py @@ -13,18 +13,21 @@ # limitations under the License. import logging -from typing import Optional +from typing import TYPE_CHECKING, Optional from synapse.api.constants import LimitBlockingTypes, UserTypes from synapse.api.errors import Codes, ResourceLimitError from synapse.config.server import is_threepid_reserved from synapse.types import Requester +if TYPE_CHECKING: + from synapse.server import HomeServer + logger = logging.getLogger(__name__) class AuthBlocking: - def __init__(self, hs): + def __init__(self, hs: "HomeServer"): self.store = hs.get_datastore() self._server_notices_mxid = hs.config.server_notices_mxid @@ -43,7 +46,7 @@ class AuthBlocking: threepid: Optional[dict] = None, user_type: Optional[str] = None, requester: Optional[Requester] = None, - ): + ) -> None: """Checks if the user should be rejected for some external reason, such as monthly active user limiting or global disable flag diff --git a/synapse/event_auth.py b/synapse/event_auth.py index c831d9f73c..afc2bc8267 100644 --- a/synapse/event_auth.py +++ b/synapse/event_auth.py @@ -14,7 +14,7 @@ # limitations under the License. import logging -from typing import List, Optional, Set, Tuple +from typing import Any, Dict, List, Optional, Set, Tuple from canonicaljson import encode_canonical_json from signedjson.key import decode_verify_key_bytes @@ -688,7 +688,7 @@ def _verify_third_party_invite(event: EventBase, auth_events: StateMap[EventBase return False -def get_public_keys(invite_event): +def get_public_keys(invite_event: EventBase) -> List[Dict[str, Any]]: public_keys = [] if "public_key" in invite_event.content: o = {"public_key": invite_event.content["public_key"]} -- cgit 1.5.1 From 59d24c5bef4e05fa7be0cad1f7e63f0a0097374b Mon Sep 17 00:00:00 2001 From: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> Date: Fri, 23 Apr 2021 17:06:47 +0100 Subject: pass a reactor into SynapseSite (#9874) --- changelog.d/9874.misc | 1 + synapse/app/generic_worker.py | 1 + synapse/app/homeserver.py | 25 ++++++++++--------------- synapse/http/site.py | 37 ++++++++++++++++++++++++++++--------- tests/replication/_base.py | 1 + tests/test_server.py | 1 + tests/unittest.py | 1 + 7 files changed, 43 insertions(+), 24 deletions(-) create mode 100644 changelog.d/9874.misc diff --git a/changelog.d/9874.misc b/changelog.d/9874.misc new file mode 100644 index 0000000000..ba1097e65e --- /dev/null +++ b/changelog.d/9874.misc @@ -0,0 +1 @@ +Pass a reactor into `SynapseSite` to make testing easier. 
diff --git a/synapse/app/generic_worker.py b/synapse/app/generic_worker.py index 7b2ac3ca64..70e07d0574 100644 --- a/synapse/app/generic_worker.py +++ b/synapse/app/generic_worker.py @@ -367,6 +367,7 @@ class GenericWorkerServer(HomeServer): listener_config, root_resource, self.version_string, + reactor=self.get_reactor(), ), reactor=self.get_reactor(), ) diff --git a/synapse/app/homeserver.py b/synapse/app/homeserver.py index 8be8b520eb..140f6bcdee 100644 --- a/synapse/app/homeserver.py +++ b/synapse/app/homeserver.py @@ -126,19 +126,20 @@ class SynapseHomeServer(HomeServer): else: root_resource = OptionsResource() - root_resource = create_resource_tree(resources, root_resource) + site = SynapseSite( + "synapse.access.%s.%s" % ("https" if tls else "http", site_tag), + site_tag, + listener_config, + create_resource_tree(resources, root_resource), + self.version_string, + reactor=self.get_reactor(), + ) if tls: ports = listen_ssl( bind_addresses, port, - SynapseSite( - "synapse.access.https.%s" % (site_tag,), - site_tag, - listener_config, - root_resource, - self.version_string, - ), + site, self.tls_server_context_factory, reactor=self.get_reactor(), ) @@ -148,13 +149,7 @@ class SynapseHomeServer(HomeServer): ports = listen_tcp( bind_addresses, port, - SynapseSite( - "synapse.access.http.%s" % (site_tag,), - site_tag, - listener_config, - root_resource, - self.version_string, - ), + site, reactor=self.get_reactor(), ) logger.info("Synapse now listening on TCP port %d", port) diff --git a/synapse/http/site.py b/synapse/http/site.py index 32b5e19c09..e911ee4809 100644 --- a/synapse/http/site.py +++ b/synapse/http/site.py @@ -19,8 +19,9 @@ from typing import Optional, Tuple, Type, Union import attr from zope.interface import implementer -from twisted.internet.interfaces import IAddress +from twisted.internet.interfaces import IAddress, IReactorTime from twisted.python.failure import Failure +from twisted.web.resource import IResource from twisted.web.server import Request, Site from synapse.config.server import ListenerConfig @@ -485,21 +486,39 @@ class _XForwardedForAddress: class SynapseSite(Site): """ - Subclass of a twisted http Site that does access logging with python's - standard logging + Synapse-specific twisted http Site + + This does two main things. + + First, it replaces the requestFactory in use so that we build SynapseRequests + instead of regular t.w.server.Requests. All of the constructor params are really + just parameters for SynapseRequest. + + Second, it inhibits the log() method called by Request.finish, since SynapseRequest + does its own logging. """ def __init__( self, - logger_name, - site_tag, + logger_name: str, + site_tag: str, config: ListenerConfig, - resource, + resource: IResource, server_version_string, - *args, - **kwargs, + reactor: IReactorTime, ): - Site.__init__(self, resource, *args, **kwargs) + """ + + Args: + logger_name: The name of the logger to use for access logs. + site_tag: A tag to use for this site - mostly in access logs. 
+ config: Configuration for the HTTP listener corresponding to this site + resource: The base of the resource tree to be used for serving requests on + this site + server_version_string: A string to present for the Server header + reactor: reactor to be used to manage connection timeouts + """ + Site.__init__(self, resource, reactor=reactor) self.site_tag = site_tag diff --git a/tests/replication/_base.py b/tests/replication/_base.py index c9d04aef29..5cf58d8b60 100644 --- a/tests/replication/_base.py +++ b/tests/replication/_base.py @@ -349,6 +349,7 @@ class BaseMultiWorkerStreamTestCase(unittest.HomeserverTestCase): config=worker_hs.config.server.listeners[0], resource=resource, server_version_string="1", + reactor=self.reactor, ) if worker_hs.config.redis.redis_enabled: diff --git a/tests/test_server.py b/tests/test_server.py index 55cde7f62f..45400be367 100644 --- a/tests/test_server.py +++ b/tests/test_server.py @@ -202,6 +202,7 @@ class OptionsResourceTests(unittest.TestCase): parse_listener_def({"type": "http", "port": 0}), self.resource, "1.0", + reactor=self.reactor, ) # render the request and return the channel diff --git a/tests/unittest.py b/tests/unittest.py index ee22a53849..5353e75c7c 100644 --- a/tests/unittest.py +++ b/tests/unittest.py @@ -247,6 +247,7 @@ class HomeserverTestCase(TestCase): config=self.hs.config.server.listeners[0], resource=self.resource, server_version_string="1", + reactor=self.reactor, ) from tests.rest.client.v1.utils import RestHelper -- cgit 1.5.1 From 695b73c861aa26ab591cad3f378214b2666e806e Mon Sep 17 00:00:00 2001 From: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com> Date: Fri, 23 Apr 2021 18:22:47 +0100 Subject: Allow OIDC cookies to work on non-root public baseurls (#9726) Applied a (slightly modified) patch from https://github.com/matrix-org/synapse/issues/9574. As far as I understand this would allow the cookie set during the OIDC flow to work on deployments using public baseurls that do not sit at the URL path root. --- changelog.d/9726.bugfix | 1 + synapse/config/server.py | 8 ++++---- synapse/handlers/oidc.py | 22 +++++++++++++++++----- 3 files changed, 22 insertions(+), 9 deletions(-) create mode 100644 changelog.d/9726.bugfix diff --git a/changelog.d/9726.bugfix b/changelog.d/9726.bugfix new file mode 100644 index 0000000000..4ba0b24327 --- /dev/null +++ b/changelog.d/9726.bugfix @@ -0,0 +1 @@ +Fixes the OIDC SSO flow when using a `public_baseurl` value including a non-root URL path. \ No newline at end of file diff --git a/synapse/config/server.py b/synapse/config/server.py index 02b86b11a5..21ca7b33e3 100644 --- a/synapse/config/server.py +++ b/synapse/config/server.py @@ -235,7 +235,11 @@ class ServerConfig(Config): self.print_pidfile = config.get("print_pidfile") self.user_agent_suffix = config.get("user_agent_suffix") self.use_frozen_dicts = config.get("use_frozen_dicts", False) + self.public_baseurl = config.get("public_baseurl") + if self.public_baseurl is not None: + if self.public_baseurl[-1] != "/": + self.public_baseurl += "/" # Whether to enable user presence. presence_config = config.get("presence") or {} @@ -407,10 +411,6 @@ class ServerConfig(Config): config_path=("federation_ip_range_blacklist",), ) - if self.public_baseurl is not None: - if self.public_baseurl[-1] != "/": - self.public_baseurl += "/" - # (undocumented) option for torturing the worker-mode replication a bit, # for testing. The value defines the number of milliseconds to pause before # sending out any replication updates. 
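The core of the fix, sketched as a standalone helper for illustration: `oidc_cookie_path` is hypothetical, but the computation mirrors what the handler change below does, deriving the cookie's `Path=` attribute from `public_baseurl`, which the config change above now normalises to end in `/` before use.

from urllib.parse import urlparse

def oidc_cookie_path(public_baseurl: str) -> bytes:
    # public_baseurl always ends in "/" after config normalisation
    return urlparse(public_baseurl).path.encode("utf-8") + b"_synapse/client/oidc"

# At the URL path root, behaviour is unchanged:
assert oidc_cookie_path("https://example.com/") == b"/_synapse/client/oidc"
# Under a sub-path, the cookie is now scoped so the browser returns it
# on the OIDC callback:
assert oidc_cookie_path("https://example.com/matrix/") == b"/matrix/_synapse/client/oidc"
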
diff --git a/synapse/handlers/oidc.py b/synapse/handlers/oidc.py index 1c4a43be0a..ee6e41c0e4 100644 --- a/synapse/handlers/oidc.py +++ b/synapse/handlers/oidc.py @@ -15,7 +15,7 @@ import inspect import logging from typing import TYPE_CHECKING, Dict, Generic, List, Optional, TypeVar, Union -from urllib.parse import urlencode +from urllib.parse import urlencode, urlparse import attr import pymacaroons @@ -68,8 +68,8 @@ logger = logging.getLogger(__name__) # # Here we have the names of the cookies, and the options we use to set them. _SESSION_COOKIES = [ - (b"oidc_session", b"Path=/_synapse/client/oidc; HttpOnly; Secure; SameSite=None"), - (b"oidc_session_no_samesite", b"Path=/_synapse/client/oidc; HttpOnly"), + (b"oidc_session", b"HttpOnly; Secure; SameSite=None"), + (b"oidc_session_no_samesite", b"HttpOnly"), ] #: A token exchanged from the token endpoint, as per RFC6749 sec 5.1. and @@ -279,6 +279,13 @@ class OidcProvider: self._config = provider self._callback_url = hs.config.oidc_callback_url # type: str + # Calculate the prefix for OIDC callback paths based on the public_baseurl. + # We'll insert this into the Path= parameter of any session cookies we set. + public_baseurl_path = urlparse(hs.config.server.public_baseurl).path + self._callback_path_prefix = ( + public_baseurl_path.encode("utf-8") + b"_synapse/client/oidc" + ) + self._oidc_attribute_requirements = provider.attribute_requirements self._scopes = provider.scopes self._user_profile_method = provider.user_profile_method @@ -779,8 +786,13 @@ class OidcProvider: for cookie_name, options in _SESSION_COOKIES: request.cookies.append( - b"%s=%s; Max-Age=3600; %s" - % (cookie_name, cookie.encode("utf-8"), options) + b"%s=%s; Max-Age=3600; Path=%s; %s" + % ( + cookie_name, + cookie.encode("utf-8"), + self._callback_path_prefix, + options, + ) ) metadata = await self.load_metadata() -- cgit 1.5.1 From 84936e22648d3c9f6b76028b08c33f0267f5e3a0 Mon Sep 17 00:00:00 2001 From: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> Date: Fri, 23 Apr 2021 18:40:57 +0100 Subject: Kill off `_PushHTTPChannel`. (#9878) First of all, a fixup to `FakeChannel` which is needed to make it work with the default HTTP channel implementation. Secondly, it looks like we no longer need `_PushHTTPChannel`, because as of #8013, the producer that gets attached to the `HTTPChannel` is now an `IPushProducer`. This is good, because it means we can remove a whole load of test-specific boilerplate which causes variation between tests and production. --- changelog.d/9878.misc | 1 + tests/replication/_base.py | 134 +++++++-------------------------------------- tests/server.py | 6 -- 3 files changed, 20 insertions(+), 121 deletions(-) create mode 100644 changelog.d/9878.misc diff --git a/changelog.d/9878.misc b/changelog.d/9878.misc new file mode 100644 index 0000000000..927876852d --- /dev/null +++ b/changelog.d/9878.misc @@ -0,0 +1 @@ +Remove redundant `_PushHTTPChannel` test class. diff --git a/tests/replication/_base.py b/tests/replication/_base.py index 5cf58d8b60..dc3519ea13 100644 --- a/tests/replication/_base.py +++ b/tests/replication/_base.py @@ -12,14 +12,10 @@ # See the License for the specific language governing permissions and # limitations under the License. 
import logging -from typing import Any, Callable, Dict, List, Optional, Tuple, Type +from typing import Any, Callable, Dict, List, Optional, Tuple -from twisted.internet.interfaces import IConsumer, IPullProducer, IReactorTime from twisted.internet.protocol import Protocol -from twisted.internet.task import LoopingCall -from twisted.web.http import HTTPChannel from twisted.web.resource import Resource -from twisted.web.server import Request, Site from synapse.app.generic_worker import GenericWorkerServer from synapse.http.server import JsonResource @@ -33,7 +29,6 @@ from synapse.replication.tcp.resource import ( ServerReplicationStreamProtocol, ) from synapse.server import HomeServer -from synapse.util import Clock from tests import unittest from tests.server import FakeTransport @@ -154,7 +149,19 @@ class BaseStreamTestCase(unittest.HomeserverTestCase): client_protocol = client_factory.buildProtocol(None) # Set up the server side protocol - channel = _PushHTTPChannel(self.reactor, SynapseRequest, self.site) + channel = self.site.buildProtocol(None) + + # hook into the channel's request factory so that we can keep a record + # of the requests + requests: List[SynapseRequest] = [] + real_request_factory = channel.requestFactory + + def request_factory(*args, **kwargs): + request = real_request_factory(*args, **kwargs) + requests.append(request) + return request + + channel.requestFactory = request_factory # Connect client to server and vice versa. client_to_server_transport = FakeTransport( @@ -176,7 +183,10 @@ class BaseStreamTestCase(unittest.HomeserverTestCase): server_to_client_transport.loseConnection() client_to_server_transport.loseConnection() - return channel.request + # there should have been exactly one request + self.assertEqual(len(requests), 1) + + return requests[0] def assert_request_is_get_repl_stream_updates( self, request: SynapseRequest, stream_name: str @@ -387,7 +397,7 @@ class BaseMultiWorkerStreamTestCase(unittest.HomeserverTestCase): client_protocol = client_factory.buildProtocol(None) # Set up the server side protocol - channel = _PushHTTPChannel(self.reactor, SynapseRequest, self._hs_to_site[hs]) + channel = self._hs_to_site[hs].buildProtocol(None) # Connect client to server and vice versa. client_to_server_transport = FakeTransport( @@ -445,112 +455,6 @@ class TestReplicationDataHandler(ReplicationDataHandler): self.received_rdata_rows.append((stream_name, token, r)) -class _PushHTTPChannel(HTTPChannel): - """A HTTPChannel that wraps pull producers to push producers. - - This is a hack to get around the fact that HTTPChannel transparently wraps a - pull producer (which is what Synapse uses to reply to requests) with - `_PullToPush` to convert it to a push producer. Unfortunately `_PullToPush` - uses the standard reactor rather than letting us use our test reactor, which - makes it very hard to test. - """ - - def __init__( - self, reactor: IReactorTime, request_factory: Type[Request], site: Site - ): - super().__init__() - self.reactor = reactor - self.requestFactory = request_factory - self.site = site - - self._pull_to_push_producer = None # type: Optional[_PullToPushProducer] - - def registerProducer(self, producer, streaming): - # Convert pull producers to push producer. 
- if not streaming: - self._pull_to_push_producer = _PullToPushProducer( - self.reactor, producer, self - ) - producer = self._pull_to_push_producer - - super().registerProducer(producer, True) - - def unregisterProducer(self): - if self._pull_to_push_producer: - # We need to manually stop the _PullToPushProducer. - self._pull_to_push_producer.stop() - - def checkPersistence(self, request, version): - """Check whether the connection can be re-used""" - # We hijack this to always say no for ease of wiring stuff up in - # `handle_http_replication_attempt`. - request.responseHeaders.setRawHeaders(b"connection", [b"close"]) - return False - - def requestDone(self, request): - # Store the request for inspection. - self.request = request - super().requestDone(request) - - -class _PullToPushProducer: - """A push producer that wraps a pull producer.""" - - def __init__( - self, reactor: IReactorTime, producer: IPullProducer, consumer: IConsumer - ): - self._clock = Clock(reactor) - self._producer = producer - self._consumer = consumer - - # While running we use a looping call with a zero delay to call - # resumeProducing on given producer. - self._looping_call = None # type: Optional[LoopingCall] - - # We start writing next reactor tick. - self._start_loop() - - def _start_loop(self): - """Start the looping call to""" - - if not self._looping_call: - # Start a looping call which runs every tick. - self._looping_call = self._clock.looping_call(self._run_once, 0) - - def stop(self): - """Stops calling resumeProducing.""" - if self._looping_call: - self._looping_call.stop() - self._looping_call = None - - def pauseProducing(self): - """Implements IPushProducer""" - self.stop() - - def resumeProducing(self): - """Implements IPushProducer""" - self._start_loop() - - def stopProducing(self): - """Implements IPushProducer""" - self.stop() - self._producer.stopProducing() - - def _run_once(self): - """Calls resumeProducing on producer once.""" - - try: - self._producer.resumeProducing() - except Exception: - logger.exception("Failed to call resumeProducing") - try: - self._consumer.unregisterProducer() - except Exception: - pass - - self.stopProducing() - - class FakeRedisPubSubServer: """A fake Redis server for pub/sub.""" diff --git a/tests/server.py b/tests/server.py index b535a5d886..9df8cda24f 100644 --- a/tests/server.py +++ b/tests/server.py @@ -603,12 +603,6 @@ class FakeTransport: if self.disconnected: return - if not hasattr(self.other, "transport"): - # the other has no transport yet; reschedule - if self.autoflush: - self._reactor.callLater(0.0, self.flush) - return - if maxbytes is not None: to_write = self.buffer[:maxbytes] else: -- cgit 1.5.1 From 3ff225175462dde8376aa584e3a47c43b1f0e790 Mon Sep 17 00:00:00 2001 From: Richard van der Hoff <1389908+richvdh@users.noreply.github.com> Date: Fri, 23 Apr 2021 19:20:44 +0100 Subject: Improved validation for received requests (#9817) * Simplify `start_listening` callpath * Correctly check the size of uploaded files --- changelog.d/9817.misc | 1 + synapse/api/constants.py | 3 ++ synapse/app/_base.py | 30 ++++++++++-- synapse/app/admin_cmd.py | 8 +-- synapse/app/generic_worker.py | 11 +++-- synapse/app/homeserver.py | 17 +++++-- synapse/config/logger.py | 3 +- synapse/event_auth.py | 4 +- synapse/http/site.py | 32 ++++++++++-- synapse/rest/media/v1/upload_resource.py | 2 - synapse/server.py | 8 +++ tests/http/test_site.py | 83 ++++++++++++++++++++++++++++++++ tests/replication/_base.py | 1 + tests/test_server.py | 1 + tests/unittest.py | 1 + 15 files 
changed, 174 insertions(+), 31 deletions(-) create mode 100644 changelog.d/9817.misc create mode 100644 tests/http/test_site.py diff --git a/changelog.d/9817.misc b/changelog.d/9817.misc new file mode 100644 index 0000000000..8aa8895f05 --- /dev/null +++ b/changelog.d/9817.misc @@ -0,0 +1 @@ +Fix a long-standing bug which caused `max_upload_size` to not be correctly enforced. diff --git a/synapse/api/constants.py b/synapse/api/constants.py index 31a59bceec..936b6534b4 100644 --- a/synapse/api/constants.py +++ b/synapse/api/constants.py @@ -17,6 +17,9 @@ """Contains constants from the specification.""" +# the max size of a (canonical-json-encoded) event +MAX_PDU_SIZE = 65536 + # the "depth" field on events is limited to 2**63 - 1 MAX_DEPTH = 2 ** 63 - 1 diff --git a/synapse/app/_base.py b/synapse/app/_base.py index 2113c4f370..638e01c1b2 100644 --- a/synapse/app/_base.py +++ b/synapse/app/_base.py @@ -30,9 +30,10 @@ from twisted.internet import defer, error, reactor from twisted.protocols.tls import TLSMemoryBIOFactory import synapse +from synapse.api.constants import MAX_PDU_SIZE from synapse.app import check_bind_error from synapse.app.phone_stats_home import start_phone_stats_home -from synapse.config.server import ListenerConfig +from synapse.config.homeserver import HomeServerConfig from synapse.crypto import context_factory from synapse.logging.context import PreserveLoggingContext from synapse.metrics.background_process_metrics import wrap_as_background_process @@ -288,7 +289,7 @@ def refresh_certificate(hs): logger.info("Context factories updated.") -async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]): +async def start(hs: "synapse.server.HomeServer"): """ Start a Synapse server or worker. @@ -300,7 +301,6 @@ async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerCon Args: hs: homeserver instance - listeners: Listener configuration ('listeners' in homeserver.yaml) """ # Set up the SIGHUP machinery. if hasattr(signal, "SIGHUP"): @@ -336,7 +336,7 @@ async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerCon synapse.logging.opentracing.init_tracer(hs) # type: ignore[attr-defined] # noqa # It is now safe to start your Synapse. - hs.start_listening(listeners) + hs.start_listening() hs.get_datastore().db_pool.start_profiling() hs.get_pusherpool().start() @@ -530,3 +530,25 @@ def sdnotify(state): # this is a bit surprising, since we don't expect to have a NOTIFY_SOCKET # unless systemd is expecting us to notify it. logger.warning("Unable to send notification to systemd: %s", e) + + +def max_request_body_size(config: HomeServerConfig) -> int: + """Get a suitable maximum size for incoming HTTP requests""" + + # Other than media uploads, the biggest request we expect to see is a fully-loaded + # /federation/v1/send request. + # + # The main thing in such a request is up to 50 PDUs, and up to 100 EDUs. PDUs are + # limited to 65536 bytes (possibly slightly more if the sender didn't use canonical + # json encoding); there is no specced limit to EDUs (see + # https://github.com/matrix-org/matrix-doc/issues/3121). 
+ # + # in short, we somewhat arbitrarily limit requests to 200 * 64K (about 12.5M) + # + max_request_size = 200 * MAX_PDU_SIZE + + # if we have a media repo enabled, we may need to allow larger uploads than that + if config.media.can_load_media_repo: + max_request_size = max(max_request_size, config.media.max_upload_size) + + return max_request_size diff --git a/synapse/app/admin_cmd.py b/synapse/app/admin_cmd.py index eb256db749..68ae19c977 100644 --- a/synapse/app/admin_cmd.py +++ b/synapse/app/admin_cmd.py @@ -70,12 +70,6 @@ class AdminCmdSlavedStore( class AdminCmdServer(HomeServer): DATASTORE_CLASS = AdminCmdSlavedStore - def _listen_http(self, listener_config): - pass - - def start_listening(self, listeners): - pass - async def export_data_command(hs, args): """Export data for a user. @@ -232,7 +226,7 @@ def start(config_options): async def run(): with LoggingContext("command"): - _base.start(ss, []) + _base.start(ss) await args.func(ss, args) _base.start_worker_reactor( diff --git a/synapse/app/generic_worker.py b/synapse/app/generic_worker.py index 70e07d0574..1a15ceee81 100644 --- a/synapse/app/generic_worker.py +++ b/synapse/app/generic_worker.py @@ -15,7 +15,7 @@ # limitations under the License. import logging import sys -from typing import Dict, Iterable, Optional +from typing import Dict, Optional from twisted.internet import address from twisted.web.resource import IResource @@ -32,7 +32,7 @@ from synapse.api.urls import ( SERVER_KEY_V2_PREFIX, ) from synapse.app import _base -from synapse.app._base import register_start +from synapse.app._base import max_request_body_size, register_start from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging @@ -367,6 +367,7 @@ class GenericWorkerServer(HomeServer): listener_config, root_resource, self.version_string, + max_request_body_size=max_request_body_size(self.config), reactor=self.get_reactor(), ), reactor=self.get_reactor(), @@ -374,8 +375,8 @@ class GenericWorkerServer(HomeServer): logger.info("Synapse worker now listening on port %d", port) - def start_listening(self, listeners: Iterable[ListenerConfig]): - for listener in listeners: + def start_listening(self): + for listener in self.config.worker_listeners: if listener.type == "http": self._listen_http(listener) elif listener.type == "manhole": @@ -468,7 +469,7 @@ def start(config_options): # streams. Will no-op if no streams can be written to by this worker. 
hs.get_replication_streamer() - register_start(_base.start, hs, config.worker_listeners) + register_start(_base.start, hs) _base.start_worker_reactor("synapse-generic-worker", config) diff --git a/synapse/app/homeserver.py b/synapse/app/homeserver.py index 140f6bcdee..8e78134bbe 100644 --- a/synapse/app/homeserver.py +++ b/synapse/app/homeserver.py @@ -17,7 +17,7 @@ import logging import os import sys -from typing import Iterable, Iterator +from typing import Iterator from twisted.internet import reactor from twisted.web.resource import EncodingResourceWrapper, IResource @@ -36,7 +36,13 @@ from synapse.api.urls import ( WEB_CLIENT_PREFIX, ) from synapse.app import _base -from synapse.app._base import listen_ssl, listen_tcp, quit_with_error, register_start +from synapse.app._base import ( + listen_ssl, + listen_tcp, + max_request_body_size, + quit_with_error, + register_start, +) from synapse.config._base import ConfigError from synapse.config.emailconfig import ThreepidBehaviour from synapse.config.homeserver import HomeServerConfig @@ -132,6 +138,7 @@ class SynapseHomeServer(HomeServer): listener_config, create_resource_tree(resources, root_resource), self.version_string, + max_request_body_size=max_request_body_size(self.config), reactor=self.get_reactor(), ) @@ -268,14 +275,14 @@ class SynapseHomeServer(HomeServer): return resources - def start_listening(self, listeners: Iterable[ListenerConfig]): + def start_listening(self): if self.config.redis_enabled: # If redis is enabled we connect via the replication command handler # in the same way as the workers (since we're effectively a client # rather than a server). self.get_tcp_replication().start_replication(self) - for listener in listeners: + for listener in self.config.server.listeners: if listener.type == "http": self._listening_services.extend( self._listener_http(self.config, listener) @@ -407,7 +414,7 @@ def setup(config_options): # Loading the provider metadata also ensures the provider config is valid. await oidc.load_metadata() - await _base.start(hs, config.listeners) + await _base.start(hs) hs.get_datastore().db_pool.updates.start_doing_background_updates() diff --git a/synapse/config/logger.py b/synapse/config/logger.py index b174e0df6d..813076dfe2 100644 --- a/synapse/config/logger.py +++ b/synapse/config/logger.py @@ -31,7 +31,6 @@ from twisted.logger import ( ) import synapse -from synapse.app import _base as appbase from synapse.logging._structured import setup_structured_logging from synapse.logging.context import LoggingContextFilter from synapse.logging.filter import MetadataFilter @@ -318,6 +317,8 @@ def setup_logging( # Perform one-time logging configuration. _setup_stdlib_logging(config, log_config_path, logBeginner=logBeginner) # Add a SIGHUP handler to reload the logging configuration, if one is available. + from synapse.app import _base as appbase + appbase.register_sighup(_reload_logging_config, log_config_path) # Log immediately so we can grep backwards. 
diff --git a/synapse/event_auth.py b/synapse/event_auth.py index afc2bc8267..70c556566e 100644 --- a/synapse/event_auth.py +++ b/synapse/event_auth.py @@ -21,7 +21,7 @@ from signedjson.key import decode_verify_key_bytes from signedjson.sign import SignatureVerifyException, verify_signed_json from unpaddedbase64 import decode_base64 -from synapse.api.constants import EventTypes, JoinRules, Membership +from synapse.api.constants import MAX_PDU_SIZE, EventTypes, JoinRules, Membership from synapse.api.errors import AuthError, EventSizeError, SynapseError from synapse.api.room_versions import ( KNOWN_ROOM_VERSIONS, @@ -205,7 +205,7 @@ def _check_size_limits(event: EventBase) -> None: too_big("type") if len(event.event_id) > 255: too_big("event_id") - if len(encode_canonical_json(event.get_pdu_json())) > 65536: + if len(encode_canonical_json(event.get_pdu_json())) > MAX_PDU_SIZE: too_big("event") diff --git a/synapse/http/site.py b/synapse/http/site.py index e911ee4809..671fd3fbcc 100644 --- a/synapse/http/site.py +++ b/synapse/http/site.py @@ -14,7 +14,7 @@ import contextlib import logging import time -from typing import Optional, Tuple, Type, Union +from typing import Optional, Tuple, Union import attr from zope.interface import implementer @@ -50,6 +50,7 @@ class SynapseRequest(Request): * Redaction of access_token query-params in __repr__ * Logging at start and end * Metrics to record CPU, wallclock and DB time by endpoint. + * A limit to the size of request which will be accepted It also provides a method `processing`, which returns a context manager. If this method is called, the request won't be logged until the context manager is closed; @@ -60,8 +61,9 @@ class SynapseRequest(Request): logcontext: the log context for this request """ - def __init__(self, channel, *args, **kw): + def __init__(self, channel, *args, max_request_body_size=1024, **kw): Request.__init__(self, channel, *args, **kw) + self._max_request_body_size = max_request_body_size self.site = channel.site # type: SynapseSite self._channel = channel # this is used by the tests self.start_time = 0.0 @@ -98,6 +100,18 @@ class SynapseRequest(Request): self.site.site_tag, ) + def handleContentChunk(self, data): + # we should have a `content` by now. 
+ assert self.content, "handleContentChunk() called before gotLength()" + if self.content.tell() + len(data) > self._max_request_body_size: + logger.warning( + "Aborting connection from %s because the request exceeds maximum size", + self.client, + ) + self.transport.abortConnection() + return + super().handleContentChunk(data) + @property def requester(self) -> Optional[Union[Requester, str]]: return self._requester @@ -505,6 +519,7 @@ class SynapseSite(Site): config: ListenerConfig, resource: IResource, server_version_string, + max_request_body_size: int, reactor: IReactorTime, ): """ @@ -516,6 +531,8 @@ class SynapseSite(Site): resource: The base of the resource tree to be used for serving requests on this site server_version_string: A string to present for the Server header + max_request_body_size: Maximum request body length to allow before + dropping the connection reactor: reactor to be used to manage connection timeouts """ Site.__init__(self, resource, reactor=reactor) @@ -524,9 +541,14 @@ class SynapseSite(Site): assert config.http_options is not None proxied = config.http_options.x_forwarded - self.requestFactory = ( - XForwardedForRequest if proxied else SynapseRequest - ) # type: Type[Request] + request_class = XForwardedForRequest if proxied else SynapseRequest + + def request_factory(channel, queued) -> Request: + return request_class( + channel, max_request_body_size=max_request_body_size, queued=queued + ) + + self.requestFactory = request_factory # type: ignore self.access_logger = logging.getLogger(logger_name) self.server_version_string = server_version_string.encode("ascii") diff --git a/synapse/rest/media/v1/upload_resource.py b/synapse/rest/media/v1/upload_resource.py index 80f017a4dd..024a105bf2 100644 --- a/synapse/rest/media/v1/upload_resource.py +++ b/synapse/rest/media/v1/upload_resource.py @@ -51,8 +51,6 @@ class UploadResource(DirectServeJsonResource): async def _async_render_POST(self, request: SynapseRequest) -> None: requester = await self.auth.get_user_by_req(request) - # TODO: The checks here are a bit late. The content will have - # already been uploaded to a tmp file at this point content_length = request.getHeader("Content-Length") if content_length is None: raise SynapseError(msg="Request must specify a Content-Length", code=400) diff --git a/synapse/server.py b/synapse/server.py index 8c147be2b3..06570bb1ce 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -287,6 +287,14 @@ class HomeServer(metaclass=abc.ABCMeta): if self.config.run_background_tasks: self.setup_background_tasks() + def start_listening(self) -> None: + """Start the HTTP, manhole, metrics, etc listeners + + Does nothing in this base class; overridden in derived classes to start the + appropriate listeners. + """ + pass + def setup_background_tasks(self) -> None: """ Some handlers have side effects on instantiation (like registering diff --git a/tests/http/test_site.py b/tests/http/test_site.py new file mode 100644 index 0000000000..8c13b4f693 --- /dev/null +++ b/tests/http/test_site.py @@ -0,0 +1,83 @@ +# Copyright 2021 The Matrix.org Foundation C.I.C. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from twisted.internet.address import IPv6Address +from twisted.test.proto_helpers import StringTransport + +from synapse.app.homeserver import SynapseHomeServer + +from tests.unittest import HomeserverTestCase + + +class SynapseRequestTestCase(HomeserverTestCase): + def make_homeserver(self, reactor, clock): + return self.setup_test_homeserver(homeserver_to_use=SynapseHomeServer) + + def test_large_request(self): + """overlarge HTTP requests should be rejected""" + self.hs.start_listening() + + # find the HTTP server which is configured to listen on port 0 + (port, factory, _backlog, interface) = self.reactor.tcpServers[0] + self.assertEqual(interface, "::") + self.assertEqual(port, 0) + + # as a control case, first send a regular request. + + # complete the connection and wire it up to a fake transport + client_address = IPv6Address("TCP", "::1", "2345") + protocol = factory.buildProtocol(client_address) + transport = StringTransport() + protocol.makeConnection(transport) + + protocol.dataReceived( + b"POST / HTTP/1.1\r\n" + b"Connection: close\r\n" + b"Transfer-Encoding: chunked\r\n" + b"\r\n" + b"0\r\n" + b"\r\n" + ) + + while not transport.disconnecting: + self.reactor.advance(1) + + # we should get a 404 + self.assertRegex(transport.value().decode(), r"^HTTP/1\.1 404 ") + + # now send an oversized request + protocol = factory.buildProtocol(client_address) + transport = StringTransport() + protocol.makeConnection(transport) + + protocol.dataReceived( + b"POST / HTTP/1.1\r\n" + b"Connection: close\r\n" + b"Transfer-Encoding: chunked\r\n" + b"\r\n" + ) + + # we deliberately send all the data in one big chunk, to ensure that + # twisted isn't buffering the data in the chunked transfer decoder. + # we start with the chunk size, in hex. (We won't actually send this much) + protocol.dataReceived(b"10000000\r\n") + sent = 0 + while not transport.disconnected: + self.assertLess(sent, 0x10000000, "connection did not drop") + protocol.dataReceived(b"\0" * 1024) + sent += 1024 + + # default max upload size is 50M, so it should drop on the next buffer after + # that. 
+ self.assertEqual(sent, 50 * 1024 * 1024 + 1024) diff --git a/tests/replication/_base.py b/tests/replication/_base.py index dc3519ea13..624bd1b927 100644 --- a/tests/replication/_base.py +++ b/tests/replication/_base.py @@ -359,6 +359,7 @@ class BaseMultiWorkerStreamTestCase(unittest.HomeserverTestCase): config=worker_hs.config.server.listeners[0], resource=resource, server_version_string="1", + max_request_body_size=4096, reactor=self.reactor, ) diff --git a/tests/test_server.py b/tests/test_server.py index 45400be367..407e172e41 100644 --- a/tests/test_server.py +++ b/tests/test_server.py @@ -202,6 +202,7 @@ class OptionsResourceTests(unittest.TestCase): parse_listener_def({"type": "http", "port": 0}), self.resource, "1.0", + max_request_body_size=1234, reactor=self.reactor, ) diff --git a/tests/unittest.py b/tests/unittest.py index 5353e75c7c..9bd02bd9c4 100644 --- a/tests/unittest.py +++ b/tests/unittest.py @@ -247,6 +247,7 @@ class HomeserverTestCase(TestCase): config=self.hs.config.server.listeners[0], resource=self.resource, server_version_string="1", + max_request_body_size=1234, reactor=self.reactor, ) -- cgit 1.5.1 From 0ffa5fb935ac9285217d957403861d2e3327e109 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Tue, 27 Apr 2021 10:09:41 +0100 Subject: Use current state table for `presence.get_interested_remotes` (#9887) This should be a lot quicker than asking the state handler. --- changelog.d/9887.misc | 1 + synapse/handlers/presence.py | 9 ++------- 2 files changed, 3 insertions(+), 7 deletions(-) create mode 100644 changelog.d/9887.misc diff --git a/changelog.d/9887.misc b/changelog.d/9887.misc new file mode 100644 index 0000000000..650ebf85e6 --- /dev/null +++ b/changelog.d/9887.misc @@ -0,0 +1 @@ +Small performance improvement around handling new local presence updates. diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index 9938be3821..969c73c1e7 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -58,7 +58,6 @@ from synapse.replication.http.presence import ( from synapse.replication.http.streams import ReplicationGetStreamUpdates from synapse.replication.tcp.commands import ClearUserSyncsCommand from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream -from synapse.state import StateHandler from synapse.storage.databases.main import DataStore from synapse.types import JsonDict, UserID, get_domain_from_id from synapse.util.async_helpers import Linearizer @@ -291,7 +290,6 @@ class BasePresenceHandler(abc.ABC): self.store, self.presence_router, states, - self.state, ) for destinations, states in hosts_and_states: @@ -757,7 +755,6 @@ class PresenceHandler(BasePresenceHandler): self.store, self.presence_router, list(to_federation_ping.values()), - self.state, ) for destinations, states in hosts_and_states: @@ -1384,7 +1381,6 @@ class PresenceEventSource: self.get_presence_router = hs.get_presence_router self.clock = hs.get_clock() self.store = hs.get_datastore() - self.state = hs.get_state_handler() @log_function async def get_new_events( @@ -1853,7 +1849,6 @@ async def get_interested_remotes( store: DataStore, presence_router: PresenceRouter, states: List[UserPresenceState], - state_handler: StateHandler, ) -> List[Tuple[Collection[str], List[UserPresenceState]]]: """Given a list of presence states figure out which remote servers should be sent which. @@ -1864,7 +1859,6 @@ async def get_interested_remotes( store: The homeserver's data store. 
presence_router: A module for augmenting the destinations for presence updates. states: A list of incoming user presence updates. - state_handler: Returns: A list of 2-tuples of destinations and states, where for @@ -1881,7 +1875,8 @@ async def get_interested_remotes( ) for room_id, states in room_ids_to_states.items(): - hosts = await state_handler.get_current_hosts_in_room(room_id) + user_ids = await store.get_users_in_room(room_id) + hosts = {get_domain_from_id(user_id) for user_id in user_ids} hosts_and_states.append((hosts, states)) for user_id, states in users_to_states.items(): -- cgit 1.5.1 From 1350b053da45c94722cd8acf9cfd367db787259c Mon Sep 17 00:00:00 2001 From: Patrick Cloke Date: Tue, 27 Apr 2021 07:30:34 -0400 Subject: Pass errors back to the client when trying multiple federation destinations. (#9868) This ensures that something like an auth error (403) will be returned to the requester instead of attempting to try more servers, which will likely result in the same error, and then passing back a generic 400 error. --- changelog.d/9868.bugfix | 1 + synapse/federation/federation_client.py | 118 ++++++++++++++++---------------- 2 files changed, 61 insertions(+), 58 deletions(-) create mode 100644 changelog.d/9868.bugfix diff --git a/changelog.d/9868.bugfix b/changelog.d/9868.bugfix new file mode 100644 index 0000000000..e2b4f97ad5 --- /dev/null +++ b/changelog.d/9868.bugfix @@ -0,0 +1 @@ +Fix a long-standing bug where errors from federation did not propagate to the client. diff --git a/synapse/federation/federation_client.py b/synapse/federation/federation_client.py index f93335edaa..a5b6a61195 100644 --- a/synapse/federation/federation_client.py +++ b/synapse/federation/federation_client.py @@ -451,6 +451,28 @@ class FederationClient(FederationBase): return signed_auth + def _is_unknown_endpoint( + self, e: HttpResponseException, synapse_error: Optional[SynapseError] = None + ) -> bool: + """ + Returns true if the response was due to an endpoint being unimplemented. + + Args: + e: The error response received from the remote server. + synapse_error: The above error converted to a SynapseError. This is + automatically generated if not provided. + + """ + if synapse_error is None: + synapse_error = e.to_synapse_error() + # There is no good way to detect an "unknown" endpoint. + # + # Dendrite returns a 404 (with no body); synapse returns a 400 + # with M_UNRECOGNISED. + return e.code == 404 or ( + e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED + ) + async def _try_destination_list( self, description: str, @@ -468,9 +490,9 @@ class FederationClient(FederationBase): callback: Function to run for each server. Passed a single argument: the server_name to try. - If the callback raises a CodeMessageException with a 300/400 code, - attempts to perform the operation stop immediately and the exception is - reraised. + If the callback raises a CodeMessageException with a 300/400 code or + an UnsupportedRoomVersionError, attempts to perform the operation + stop immediately and the exception is reraised. Otherwise, if the callback raises an Exception the error is logged and the next server tried. 
Normally the stacktrace is logged but this is @@ -492,8 +514,7 @@ class FederationClient(FederationBase): continue try: - res = await callback(destination) - return res + return await callback(destination) except InvalidResponseError as e: logger.warning("Failed to %s via %s: %s", description, destination, e) except UnsupportedRoomVersionError: @@ -502,17 +523,15 @@ synapse_error = e.to_synapse_error() failover = False + # Fail over on an internal server error, or if the destination + # doesn't implement the endpoint for some reason. if 500 <= e.code < 600: failover = True - elif failover_on_unknown_endpoint: - # there is no good way to detect an "unknown" endpoint. Dendrite - # returns a 404 (with no body); synapse returns a 400 - # with M_UNRECOGNISED. - if e.code == 404 or ( - e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED - ): - failover = True + elif failover_on_unknown_endpoint and self._is_unknown_endpoint( + e, synapse_error + ): + failover = True if not failover: raise synapse_error from e @@ -570,9 +589,8 @@ UnsupportedRoomVersionError: if remote responds with a room version we don't understand. - SynapseError: if the chosen remote server returns a 300/400 code. - - RuntimeError: if no servers were reachable. + SynapseError: if the chosen remote server returns a 300/400 code, or + no servers successfully handle the request. """ valid_memberships = {Membership.JOIN, Membership.LEAVE} if membership not in valid_memberships: @@ -642,9 +660,8 @@ ``auth_chain``. Raises: - SynapseError: if the chosen remote server returns a 300/400 code. - - RuntimeError: if no servers were reachable. + SynapseError: if the chosen remote server returns a 300/400 code, or + no servers successfully handle the request. """ async def send_request(destination) -> Dict[str, Any]: @@ -673,7 +690,7 @@ if create_event is None: # If the state doesn't have a create event then the room is # invalid, and it would fail auth checks anyway. - raise SynapseError(400, "No create event in state") + raise InvalidResponseError("No create event in state") # the room version should be sane. create_room_version = create_event.content.get( @@ -746,16 +763,11 @@ content=pdu.get_pdu_json(time_now), ) except HttpResponseException as e: - if e.code in [400, 404]: - err = e.to_synapse_error() - - # If we receive an error response that isn't a generic error, or an - # unrecognised endpoint error, we assume that the remote understands - # the v2 invite API and this is a legitimate error. - if err.errcode not in [Codes.UNKNOWN, Codes.UNRECOGNIZED]: - raise err - else: - raise e.to_synapse_error() + # If an error is received that is due to an unrecognised endpoint, + # fall back to the v1 endpoint. Otherwise consider it a legitimate error + # and raise. + if not self._is_unknown_endpoint(e): + raise logger.debug("Couldn't send_join with the v2 API, falling back to the v1 API") @@ -802,6 +814,11 @@ Returns: The event as a dict as returned by the remote server + + Raises: + SynapseError: if the remote server returns an error or if the server + only supports the v1 endpoint and a room version other than "1" + or "2" is requested.
""" time_now = self._clock.time_msec() @@ -817,28 +834,19 @@ }, ) except HttpResponseException as e: - if e.code in [400, 404]: - err = e.to_synapse_error() - - # If we receive an error response that isn't a generic error, we - # assume that the remote understands the v2 invite API and this - # is a legitimate error. - if err.errcode != Codes.UNKNOWN: - raise err - - # Otherwise, we assume that the remote server doesn't understand - # the v2 invite API. That's ok provided the room uses old-style event - # IDs. + # If an error is received that is due to an unrecognised endpoint, + # fall back to the v1 endpoint if the room uses old-style event IDs. + # Otherwise consider it a legitimate error and raise. + err = e.to_synapse_error() + if self._is_unknown_endpoint(e, err): if room_version.event_format != EventFormatVersions.V1: raise SynapseError( 400, "User's homeserver does not support this room version", Codes.UNSUPPORTED_ROOM_VERSION, ) - elif e.code in (403, 429): - raise e.to_synapse_error() else: - raise + raise err # Didn't work, try v1 API. # Note the v1 API returns a tuple of `(200, content)` @@ -865,9 +873,8 @@ pdu: event to be sent Raises: - SynapseError if the chosen remote server returns a 300/400 code. - - RuntimeError if no servers were reachable. + SynapseError: if the chosen remote server returns a 300/400 code, or + no servers successfully handle the request. """ async def send_request(destination: str) -> None: @@ -889,16 +896,11 @@ content=pdu.get_pdu_json(time_now), ) except HttpResponseException as e: - if e.code in [400, 404]: - err = e.to_synapse_error() - - # If we receive an error response that isn't a generic error, or an - # unrecognised endpoint error, we assume that the remote understands - # the v2 invite API and this is a legitimate error. - if err.errcode not in [Codes.UNKNOWN, Codes.UNRECOGNIZED]: - raise err - else: - raise e.to_synapse_error() + # If an error is received that is due to an unrecognised endpoint, + # fall back to the v1 endpoint. Otherwise consider it a legitimate error + # and raise. + if not self._is_unknown_endpoint(e): + raise logger.debug("Couldn't send_leave with the v2 API, falling back to the v1 API") -- cgit 1.5.1 From fe604a022a7142157da7e90a40330beb2a11af7a Mon Sep 17 00:00:00 2001 From: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com> Date: Tue, 27 Apr 2021 13:13:07 +0100 Subject: Remove various bits of compatibility code for Python <3.6 (#9879) I went through and removed a bunch of cruft that was lying around for compatibility with old Python versions. This PR also now prevents Synapse from starting unless you're running Python 3.6+.
--- changelog.d/9879.misc | 1 + mypy.ini | 1 - synapse/__init__.py | 4 +-- synapse/python_dependencies.py | 9 ++----- synapse/rest/admin/users.py | 3 ++- synapse/rest/consent/consent_resource.py | 10 +------- synapse/rest/media/v1/filepath.py | 2 +- synapse/secrets.py | 44 -------------------------------- synapse/server.py | 5 ---- synapse/storage/_base.py | 2 +- synapse/storage/database.py | 15 +++++------ synapse/util/caches/response_cache.py | 2 +- tests/rest/admin/test_user.py | 15 +++++------ tests/storage/test__base.py | 3 ++- tests/unittest.py | 2 +- tox.ini | 9 +++---- 16 files changed, 29 insertions(+), 98 deletions(-) create mode 100644 changelog.d/9879.misc delete mode 100644 synapse/secrets.py diff --git a/changelog.d/9879.misc b/changelog.d/9879.misc new file mode 100644 index 0000000000..c9ca37cf48 --- /dev/null +++ b/changelog.d/9879.misc @@ -0,0 +1 @@ +Remove backwards-compatibility code for Python versions < 3.6. \ No newline at end of file diff --git a/mypy.ini b/mypy.ini index 32e6197409..a40f705b76 100644 --- a/mypy.ini +++ b/mypy.ini @@ -41,7 +41,6 @@ files = synapse/push, synapse/replication, synapse/rest, - synapse/secrets.py, synapse/server.py, synapse/server_notices, synapse/spam_checker_api, diff --git a/synapse/__init__.py b/synapse/__init__.py index 837e938f56..fbd49a93e1 100644 --- a/synapse/__init__.py +++ b/synapse/__init__.py @@ -21,8 +21,8 @@ import os import sys # Check that we're not running on an unsupported Python version. -if sys.version_info < (3, 5): - print("Synapse requires Python 3.5 or above.") +if sys.version_info < (3, 6): + print("Synapse requires Python 3.6 or above.") sys.exit(1) # Twisted and canonicaljson will fail to import when this file is executed to diff --git a/synapse/python_dependencies.py b/synapse/python_dependencies.py index 2a1c925ee8..2de946f464 100644 --- a/synapse/python_dependencies.py +++ b/synapse/python_dependencies.py @@ -85,7 +85,7 @@ REQUIREMENTS = [ "typing-extensions>=3.7.4", # We enforce that we have a `cryptography` version that bundles an `openssl` # with the latest security patches. - "cryptography>=3.4.7;python_version>='3.6'", + "cryptography>=3.4.7", ] CONDITIONAL_REQUIREMENTS = { @@ -100,14 +100,9 @@ CONDITIONAL_REQUIREMENTS = { # that use the protocol, such as Let's Encrypt. "acme": [ "txacme>=0.9.2", - # txacme depends on eliot. 
Eliot 1.8.0 is incompatible with - # python 3.5.2, as per https://github.com/itamarst/eliot/issues/418 - "eliot<1.8.0;python_version<'3.5.3'", ], "saml2": [ - # pysaml2 6.4.0 is incompatible with Python 3.5 (see https://github.com/IdentityPython/pysaml2/issues/749) - "pysaml2>=4.5.0,<6.4.0;python_version<'3.6'", - "pysaml2>=4.5.0;python_version>='3.6'", + "pysaml2>=4.5.0", ], "oidc": ["authlib>=0.14.0"], # systemd-python is necessary for logging to the systemd journal via diff --git a/synapse/rest/admin/users.py b/synapse/rest/admin/users.py index edda7861fa..8c9d21d3ea 100644 --- a/synapse/rest/admin/users.py +++ b/synapse/rest/admin/users.py @@ -14,6 +14,7 @@ import hashlib import hmac import logging +import secrets from http import HTTPStatus from typing import TYPE_CHECKING, Dict, List, Optional, Tuple @@ -375,7 +376,7 @@ class UserRegisterServlet(RestServlet): """ self._clear_old_nonces() - nonce = self.hs.get_secrets().token_hex(64) + nonce = secrets.token_hex(64) self.nonces[nonce] = int(self.reactor.seconds()) return 200, {"nonce": nonce} diff --git a/synapse/rest/consent/consent_resource.py b/synapse/rest/consent/consent_resource.py index c4550d3cf0..b19cd8afc5 100644 --- a/synapse/rest/consent/consent_resource.py +++ b/synapse/rest/consent/consent_resource.py @@ -32,14 +32,6 @@ TEMPLATE_LANGUAGE = "en" logger = logging.getLogger(__name__) -# use hmac.compare_digest if we have it (python 2.7.7), else just use equality -if hasattr(hmac, "compare_digest"): - compare_digest = hmac.compare_digest -else: - - def compare_digest(a, b): - return a == b - class ConsentResource(DirectServeHtmlResource): """A twisted Resource to display a privacy policy and gather consent to it @@ -209,5 +201,5 @@ class ConsentResource(DirectServeHtmlResource): .encode("ascii") ) - if not compare_digest(want_mac, userhmac): + if not hmac.compare_digest(want_mac, userhmac): raise SynapseError(HTTPStatus.FORBIDDEN, "HMAC incorrect") diff --git a/synapse/rest/media/v1/filepath.py b/synapse/rest/media/v1/filepath.py index 4088e7a059..09531ebf54 100644 --- a/synapse/rest/media/v1/filepath.py +++ b/synapse/rest/media/v1/filepath.py @@ -21,7 +21,7 @@ from typing import Callable, List NEW_FORMAT_ID_RE = re.compile(r"^\d\d\d\d-\d\d-\d\d") -def _wrap_in_base_path(func: "Callable[..., str]") -> "Callable[..., str]": +def _wrap_in_base_path(func: Callable[..., str]) -> Callable[..., str]: """Takes a function that returns a relative path and turns it into an absolute path based on the location of the primary media store """ diff --git a/synapse/secrets.py b/synapse/secrets.py deleted file mode 100644 index bf829251fd..0000000000 --- a/synapse/secrets.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright 2018 New Vector Ltd -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -""" -Injectable secrets module for Synapse. - -See https://docs.python.org/3/library/secrets.html#module-secrets for the API -used in Python 3.6, and the API emulated in Python 2.7. 
-""" -import sys - -# secrets is available since python 3.6 -if sys.version_info[0:2] >= (3, 6): - import secrets - - class Secrets: - def token_bytes(self, nbytes: int = 32) -> bytes: - return secrets.token_bytes(nbytes) - - def token_hex(self, nbytes: int = 32) -> str: - return secrets.token_hex(nbytes) - - -else: - import binascii - import os - - class Secrets: - def token_bytes(self, nbytes: int = 32) -> bytes: - return os.urandom(nbytes) - - def token_hex(self, nbytes: int = 32) -> str: - return binascii.hexlify(self.token_bytes(nbytes)).decode("ascii") diff --git a/synapse/server.py b/synapse/server.py index 06570bb1ce..2337d2d9b4 100644 --- a/synapse/server.py +++ b/synapse/server.py @@ -126,7 +126,6 @@ from synapse.rest.media.v1.media_repository import ( MediaRepository, MediaRepositoryResource, ) -from synapse.secrets import Secrets from synapse.server_notices.server_notices_manager import ServerNoticesManager from synapse.server_notices.server_notices_sender import ServerNoticesSender from synapse.server_notices.worker_server_notices_sender import ( @@ -641,10 +640,6 @@ class HomeServer(metaclass=abc.ABCMeta): def get_groups_attestation_renewer(self) -> GroupAttestionRenewer: return GroupAttestionRenewer(self) - @cache_in_self - def get_secrets(self) -> Secrets: - return Secrets() - @cache_in_self def get_stats_handler(self) -> StatsHandler: return StatsHandler(self) diff --git a/synapse/storage/_base.py b/synapse/storage/_base.py index d472676acf..6b68d8720c 100644 --- a/synapse/storage/_base.py +++ b/synapse/storage/_base.py @@ -114,7 +114,7 @@ def db_to_json(db_content: Union[memoryview, bytes, bytearray, str]) -> Any: db_content = db_content.tobytes() # Decode it to a Unicode string before feeding it to the JSON decoder, since - # Python 3.5 does not support deserializing bytes. + # it only supports handling strings if isinstance(db_content, (bytes, bytearray)): db_content = db_content.decode("utf8") diff --git a/synapse/storage/database.py b/synapse/storage/database.py index 9452368bf0..bd39c095af 100644 --- a/synapse/storage/database.py +++ b/synapse/storage/database.py @@ -171,10 +171,7 @@ class LoggingDatabaseConnection: # The type of entry which goes on our after_callbacks and exception_callbacks lists. -# -# Python 3.5.2 doesn't support Callable with an ellipsis, so we wrap it in quotes so -# that mypy sees the type but the runtime python doesn't. -_CallbackListEntry = Tuple["Callable[..., None]", Iterable[Any], Dict[str, Any]] +_CallbackListEntry = Tuple[Callable[..., None], Iterable[Any], Dict[str, Any]] R = TypeVar("R") @@ -221,7 +218,7 @@ class LoggingTransaction: self.after_callbacks = after_callbacks self.exception_callbacks = exception_callbacks - def call_after(self, callback: "Callable[..., None]", *args: Any, **kwargs: Any): + def call_after(self, callback: Callable[..., None], *args: Any, **kwargs: Any): """Call the given callback on the main twisted thread after the transaction has finished. Used to invalidate the caches on the correct thread. 
@@ -233,7 +230,7 @@ class LoggingTransaction: self.after_callbacks.append((callback, args, kwargs)) def call_on_exception( - self, callback: "Callable[..., None]", *args: Any, **kwargs: Any + self, callback: Callable[..., None], *args: Any, **kwargs: Any ): # if self.exception_callbacks is None, that means that whatever constructed the # LoggingTransaction isn't expecting there to be any callbacks; assert that @@ -485,7 +482,7 @@ class DatabasePool: desc: str, after_callbacks: List[_CallbackListEntry], exception_callbacks: List[_CallbackListEntry], - func: "Callable[..., R]", + func: Callable[..., R], *args: Any, **kwargs: Any, ) -> R: @@ -618,7 +615,7 @@ class DatabasePool: async def runInteraction( self, desc: str, - func: "Callable[..., R]", + func: Callable[..., R], *args: Any, db_autocommit: bool = False, **kwargs: Any, @@ -678,7 +675,7 @@ class DatabasePool: async def runWithConnection( self, - func: "Callable[..., R]", + func: Callable[..., R], *args: Any, db_autocommit: bool = False, **kwargs: Any, diff --git a/synapse/util/caches/response_cache.py b/synapse/util/caches/response_cache.py index 2529845c9e..25ea1bcc91 100644 --- a/synapse/util/caches/response_cache.py +++ b/synapse/util/caches/response_cache.py @@ -110,7 +110,7 @@ class ResponseCache(Generic[T]): return result.observe() def wrap( - self, key: T, callback: "Callable[..., Any]", *args: Any, **kwargs: Any + self, key: T, callback: Callable[..., Any], *args: Any, **kwargs: Any ) -> defer.Deferred: """Wrap together a *get* and *set* call, taking care of logcontexts diff --git a/tests/rest/admin/test_user.py b/tests/rest/admin/test_user.py index b3afd51522..d599a4c984 100644 --- a/tests/rest/admin/test_user.py +++ b/tests/rest/admin/test_user.py @@ -18,7 +18,7 @@ import json import urllib.parse from binascii import unhexlify from typing import List, Optional -from unittest.mock import Mock +from unittest.mock import Mock, patch import synapse.rest.admin from synapse.api.constants import UserTypes @@ -54,8 +54,6 @@ class UserRegisterTestCase(unittest.HomeserverTestCase): self.datastore = Mock(return_value=Mock()) self.datastore.get_current_state_deltas = Mock(return_value=(0, [])) - self.secrets = Mock() - self.hs = self.setup_test_homeserver() self.hs.config.registration_shared_secret = "shared" @@ -84,14 +82,13 @@ class UserRegisterTestCase(unittest.HomeserverTestCase): Calling GET on the endpoint will return a randomised nonce, using the homeserver's secrets provider. """ - secrets = Mock() - secrets.token_hex = Mock(return_value="abcd") - - self.hs.get_secrets = Mock(return_value=secrets) + with patch("secrets.token_hex") as token_hex: + # Patch secrets.token_hex for the duration of this context + token_hex.return_value = "abcd" - channel = self.make_request("GET", self.url) + channel = self.make_request("GET", self.url) - self.assertEqual(channel.json_body, {"nonce": "abcd"}) + self.assertEqual(channel.json_body, {"nonce": "abcd"}) def test_expired_nonce(self): """ diff --git a/tests/storage/test__base.py b/tests/storage/test__base.py index 6339a43f0c..200b9198f9 100644 --- a/tests/storage/test__base.py +++ b/tests/storage/test__base.py @@ -13,6 +13,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
+import secrets from tests import unittest @@ -21,7 +22,7 @@ class UpsertManyTests(unittest.HomeserverTestCase): def prepare(self, reactor, clock, hs): self.storage = hs.get_datastore() - self.table_name = "table_" + hs.get_secrets().token_hex(6) + self.table_name = "table_" + secrets.token_hex(6) self.get_success( self.storage.db_pool.runInteraction( "create", diff --git a/tests/unittest.py b/tests/unittest.py index 9bd02bd9c4..74db7c08f1 100644 --- a/tests/unittest.py +++ b/tests/unittest.py @@ -18,6 +18,7 @@ import hashlib import hmac import inspect import logging +import secrets import time from typing import Callable, Dict, Iterable, Optional, Tuple, Type, TypeVar, Union from unittest.mock import Mock, patch @@ -626,7 +627,6 @@ class HomeserverTestCase(TestCase): str: The new event's ID. """ event_creator = self.hs.get_event_creation_handler() - secrets = self.hs.get_secrets() requester = create_requester(user) event, context = self.get_success( diff --git a/tox.ini b/tox.ini index 998b04b224..ecd609271d 100644 --- a/tox.ini +++ b/tox.ini @@ -21,13 +21,11 @@ deps = # installed on that). # # anyway, make sure that we have a recent enough setuptools. - setuptools>=18.5 ; python_version >= '3.6' - setuptools>=18.5,<51.0.0 ; python_version < '3.6' + setuptools>=18.5 # we also need a semi-recent version of pip, because old ones fail to # install the "enum34" dependency of cryptography. - pip>=10 ; python_version >= '3.6' - pip>=10,<21.0 ; python_version < '3.6' + pip>=10 # directories/files we run the linters on. # if you update this list, make sure to do the same in scripts-dev/lint.sh @@ -168,8 +166,7 @@ skip_install = true usedevelop = false deps = coverage - pip>=10 ; python_version >= '3.6' - pip>=10,<21.0 ; python_version < '3.6' + pip>=10 commands= coverage combine coverage report -- cgit 1.5.1 From 4e0fd35bc918b6901fcd29371ab6d89db8ce1b5e Mon Sep 17 00:00:00 2001 From: Andrew Morgan Date: Wed, 28 Apr 2021 11:04:38 +0100 Subject: Revert "Experimental Federation Speedup (#9702)" This reverts commit 05e8c70c059f8ebb066e029bc3aa3e0cefef1019. --- changelog.d/9702.misc | 1 - contrib/experiments/test_messaging.py | 42 +++--- synapse/federation/sender/__init__.py | 145 ++++++++------------- synapse/federation/sender/per_destination_queue.py | 15 +-- synapse/storage/databases/main/transactions.py | 28 ++-- 5 files changed, 93 insertions(+), 138 deletions(-) delete mode 100644 changelog.d/9702.misc diff --git a/changelog.d/9702.misc b/changelog.d/9702.misc deleted file mode 100644 index c6e63450a9..0000000000 --- a/changelog.d/9702.misc +++ /dev/null @@ -1 +0,0 @@ -Speed up federation transmission by using fewer database calls. Contributed by @ShadowJonathan. 
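Before the restored sender code below, a rough, self-contained sketch of the per-room fan-out that this revert reinstates in `synapse/federation/sender/__init__.py`. The free-standing function and its parameters are illustrative; in the real code, `handle_room_events` is a local coroutine and `run_in_background` comes from `synapse.logging.context`:

```python
from collections import defaultdict

from twisted.internet import defer


def fan_out_by_room(events, handle_room_events, run_in_background):
    # Group the new events by room so each room's events are handled in order.
    events_by_room = defaultdict(list)
    for event in events:
        events_by_room[event.room_id].append(event)

    # Handle each room's batch concurrently, collecting errors rather than
    # letting the first failure cancel the others.
    return defer.gatherResults(
        [
            run_in_background(handle_room_events, evs)
            for evs in events_by_room.values()
        ],
        consumeErrors=True,
    )
```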
diff --git a/contrib/experiments/test_messaging.py b/contrib/experiments/test_messaging.py index 5dd172052b..31b8a68225 100644 --- a/contrib/experiments/test_messaging.py +++ b/contrib/experiments/test_messaging.py @@ -224,16 +224,14 @@ class HomeServer(ReplicationHandler): destinations = yield self.get_servers_for_context(room_name) try: - yield self.replication_layer.send_pdus( - [ - Pdu.create_new( - context=room_name, - pdu_type="sy.room.message", - content={"sender": sender, "body": body}, - origin=self.server_name, - destinations=destinations, - ) - ] + yield self.replication_layer.send_pdu( + Pdu.create_new( + context=room_name, + pdu_type="sy.room.message", + content={"sender": sender, "body": body}, + origin=self.server_name, + destinations=destinations, + ) ) except Exception as e: logger.exception(e) @@ -255,7 +253,7 @@ class HomeServer(ReplicationHandler): origin=self.server_name, destinations=destinations, ) - yield self.replication_layer.send_pdus([pdu]) + yield self.replication_layer.send_pdu(pdu) except Exception as e: logger.exception(e) @@ -267,18 +265,16 @@ class HomeServer(ReplicationHandler): destinations = yield self.get_servers_for_context(room_name) try: - yield self.replication_layer.send_pdus( - [ - Pdu.create_new( - context=room_name, - is_state=True, - pdu_type="sy.room.member", - state_key=invitee, - content={"membership": "invite"}, - origin=self.server_name, - destinations=destinations, - ) - ] + yield self.replication_layer.send_pdu( + Pdu.create_new( + context=room_name, + is_state=True, + pdu_type="sy.room.member", + state_key=invitee, + content={"membership": "invite"}, + origin=self.server_name, + destinations=destinations, + ) ) except Exception as e: logger.exception(e) diff --git a/synapse/federation/sender/__init__.py b/synapse/federation/sender/__init__.py index 022bbf7dad..deb40f4610 100644 --- a/synapse/federation/sender/__init__.py +++ b/synapse/federation/sender/__init__.py @@ -14,26 +14,19 @@ import abc import logging -from typing import ( - TYPE_CHECKING, - Collection, - Dict, - Hashable, - Iterable, - List, - Optional, - Set, - Tuple, -) +from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple from prometheus_client import Counter +from twisted.internet import defer + import synapse.metrics from synapse.api.presence import UserPresenceState from synapse.events import EventBase from synapse.federation.sender.per_destination_queue import PerDestinationQueue from synapse.federation.sender.transaction_manager import TransactionManager from synapse.federation.units import Edu +from synapse.logging.context import make_deferred_yieldable, run_in_background from synapse.metrics import ( LaterGauge, event_processing_loop_counter, @@ -262,27 +255,15 @@ class FederationSender(AbstractFederationSender): if not events and next_token >= self._last_poked_id: break - async def get_destinations_for_event( - event: EventBase, - ) -> Collection[str]: - """Computes the destinations to which this event must be sent. - - This returns an empty tuple when there are no destinations to send to, - or if this event is not from this homeserver and it is not sending - it on behalf of another server. - - Will also filter out destinations which this sender is not responsible for, - if multiple federation senders exist. - """ - + async def handle_event(event: EventBase) -> None: # Only send events for this server. 
send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of() is_mine = self.is_mine_id(event.sender) if not is_mine and send_on_behalf_of is None: - return () + return if not event.internal_metadata.should_proactively_send(): - return () + return destinations = None # type: Optional[Set[str]] if not event.prev_event_ids(): @@ -317,7 +298,7 @@ class FederationSender(AbstractFederationSender): "Failed to calculate hosts in room for event: %s", event.event_id, ) - return () + return destinations = { d @@ -327,15 +308,17 @@ class FederationSender(AbstractFederationSender): ) } - destinations.discard(self.server_name) - if send_on_behalf_of is not None: # If we are sending the event on behalf of another server # then it already has the event and there is no reason to # send the event to it. destinations.discard(send_on_behalf_of) + logger.debug("Sending %s to %r", event, destinations) + if destinations: + await self._send_pdu(event, destinations) + now = self.clock.time_msec() ts = await self.store.get_received_ts(event.event_id) @@ -343,29 +326,24 @@ class FederationSender(AbstractFederationSender): "federation_sender" ).observe((now - ts) / 1000) - return destinations - return () - - async def get_federatable_events_and_destinations( - events: Iterable[EventBase], - ) -> List[Tuple[EventBase, Collection[str]]]: - with Measure(self.clock, "get_destinations_for_events"): - # Fetch federation destinations per event, - # skip if get_destinations_for_event returns an empty collection, - # return list of event->destinations pairs. - return [ - (event, dests) - for (event, dests) in [ - (event, await get_destinations_for_event(event)) - for event in events - ] - if dests - ] - - events_and_dests = await get_federatable_events_and_destinations(events) - - # Send corresponding events to each destination queue - await self._distribute_events(events_and_dests) + async def handle_room_events(events: Iterable[EventBase]) -> None: + with Measure(self.clock, "handle_room_events"): + for event in events: + await handle_event(event) + + events_by_room = {} # type: Dict[str, List[EventBase]] + for event in events: + events_by_room.setdefault(event.room_id, []).append(event) + + await make_deferred_yieldable( + defer.gatherResults( + [ + run_in_background(handle_room_events, evs) + for evs in events_by_room.values() + ], + consumeErrors=True, + ) + ) await self.store.update_federation_out_pos("events", next_token) @@ -383,7 +361,7 @@ class FederationSender(AbstractFederationSender): events_processed_counter.inc(len(events)) event_processing_loop_room_count.labels("federation_sender").inc( - len({event.room_id for event in events}) + len(events_by_room) ) event_processing_loop_counter.labels("federation_sender").inc() @@ -395,53 +373,34 @@ class FederationSender(AbstractFederationSender): finally: self._is_processing = False - async def _distribute_events( - self, - events_and_dests: Iterable[Tuple[EventBase, Collection[str]]], - ) -> None: - """Distribute events to the respective per_destination queues. - - Also persists last-seen per-room stream_ordering to 'destination_rooms'. - - Args: - events_and_dests: A list of tuples, which are (event: EventBase, destinations: Collection[str]). - Every event is paired with its intended destinations (in federation). 
- """ - # Tuples of room_id + destination to their max-seen stream_ordering - room_with_dest_stream_ordering = {} # type: Dict[Tuple[str, str], int] - - # List of events to send to each destination - events_by_dest = {} # type: Dict[str, List[EventBase]] + async def _send_pdu(self, pdu: EventBase, destinations: Iterable[str]) -> None: + # We loop through all destinations to see whether we already have + # a transaction in progress. If we do, stick it in the pending_pdus + # table and we'll get back to it later. - # For each event-destinations pair... - for event, destinations in events_and_dests: + destinations = set(destinations) + destinations.discard(self.server_name) + logger.debug("Sending to: %s", str(destinations)) - # (we got this from the database, it's filled) - assert event.internal_metadata.stream_ordering - - sent_pdus_destination_dist_total.inc(len(destinations)) - sent_pdus_destination_dist_count.inc() + if not destinations: + return - # ...iterate over those destinations.. - for destination in destinations: - # ...update their stream-ordering... - room_with_dest_stream_ordering[(event.room_id, destination)] = max( - event.internal_metadata.stream_ordering, - room_with_dest_stream_ordering.get((event.room_id, destination), 0), - ) + sent_pdus_destination_dist_total.inc(len(destinations)) + sent_pdus_destination_dist_count.inc() - # ...and add the event to each destination queue. - events_by_dest.setdefault(destination, []).append(event) + assert pdu.internal_metadata.stream_ordering - # Bulk-store destination_rooms stream_ids - await self.store.bulk_store_destination_rooms_entries( - room_with_dest_stream_ordering + # track the fact that we have a PDU for these destinations, + # to allow us to perform catch-up later on if the remote is unreachable + # for a while. 
+ await self.store.store_destination_rooms_entries( + destinations, + pdu.room_id, + pdu.internal_metadata.stream_ordering, ) - for destination, pdus in events_by_dest.items(): - logger.debug("Sending %d pdus to %s", len(pdus), destination) - - self._get_per_destination_queue(destination).send_pdus(pdus) + for destination in destinations: + self._get_per_destination_queue(destination).send_pdu(pdu) async def send_read_receipt(self, receipt: ReadReceipt) -> None: """Send a RR to any other servers in the room diff --git a/synapse/federation/sender/per_destination_queue.py b/synapse/federation/sender/per_destination_queue.py index 3bb66bce32..3b053ebcfb 100644 --- a/synapse/federation/sender/per_destination_queue.py +++ b/synapse/federation/sender/per_destination_queue.py @@ -154,22 +154,19 @@ class PerDestinationQueue: + len(self._pending_edus_keyed) ) - def send_pdus(self, pdus: Iterable[EventBase]) -> None: - """Add PDUs to the queue, and start the transmission loop if necessary + def send_pdu(self, pdu: EventBase) -> None: + """Add a PDU to the queue, and start the transmission loop if necessary Args: - pdus: pdus to send + pdu: pdu to send """ if not self._catching_up or self._last_successful_stream_ordering is None: # only enqueue the PDU if we are not catching up (False) or do not # yet know if we have anything to catch up (None) - self._pending_pdus.extend(pdus) + self._pending_pdus.append(pdu) else: - self._catchup_last_skipped = max( - pdu.internal_metadata.stream_ordering - for pdu in pdus - if pdu.internal_metadata.stream_ordering is not None - ) + assert pdu.internal_metadata.stream_ordering + self._catchup_last_skipped = pdu.internal_metadata.stream_ordering self.attempt_new_transaction() diff --git a/synapse/storage/databases/main/transactions.py b/synapse/storage/databases/main/transactions.py index b28ca61f80..82335e7a9d 100644 --- a/synapse/storage/databases/main/transactions.py +++ b/synapse/storage/databases/main/transactions.py @@ -14,7 +14,7 @@ import logging from collections import namedtuple -from typing import Dict, List, Optional, Tuple +from typing import Iterable, List, Optional, Tuple from canonicaljson import encode_canonical_json @@ -295,33 +295,37 @@ class TransactionStore(TransactionWorkerStore): }, ) - async def bulk_store_destination_rooms_entries( - self, room_and_destination_to_ordering: Dict[Tuple[str, str], int] - ): + async def store_destination_rooms_entries( + self, + destinations: Iterable[str], + room_id: str, + stream_ordering: int, + ) -> None: """ - Updates or creates `destination_rooms` entries for a number of events. + Updates or creates `destination_rooms` entries in batch for a single event. 
Args: - room_and_destination_to_ordering: A mapping of (room, destination) -> stream_id + destinations: list of destinations + room_id: the room_id of the event + stream_ordering: the stream_ordering of the event """ await self.db_pool.simple_upsert_many( table="destinations", key_names=("destination",), - key_values={(d,) for _, d in room_and_destination_to_ordering.keys()}, + key_values=[(d,) for d in destinations], value_names=[], value_values=[], desc="store_destination_rooms_entries_dests", ) + rows = [(destination, room_id) for destination in destinations] await self.db_pool.simple_upsert_many( table="destination_rooms", - key_names=("room_id", "destination"), - key_values=list(room_and_destination_to_ordering.keys()), + key_names=("destination", "room_id"), + key_values=rows, value_names=["stream_ordering"], - value_values=[ - (stream_id,) for stream_id in room_and_destination_to_ordering.values() - ], + value_values=[(stream_ordering,)] * len(rows), desc="store_destination_rooms_entries_rooms", ) -- cgit 1.5.1 From 787de3190f70d952b0d6589e9335aa16cacc41f2 Mon Sep 17 00:00:00 2001 From: Andrew Morgan Date: Wed, 28 Apr 2021 11:43:33 +0100 Subject: 1.33.0rc1 --- CHANGES.md | 53 ++++++++++++++++++++++++++++++++++++++++++++++++ changelog.d/9162.misc | 1 - changelog.d/9726.bugfix | 1 - changelog.d/9786.misc | 1 - changelog.d/9788.bugfix | 1 - changelog.d/9796.misc | 1 - changelog.d/9800.feature | 1 - changelog.d/9801.doc | 1 - changelog.d/9802.bugfix | 1 - changelog.d/9814.feature | 1 - changelog.d/9815.misc | 1 - changelog.d/9816.misc | 1 - changelog.d/9817.misc | 1 - changelog.d/9819.feature | 1 - changelog.d/9820.feature | 1 - changelog.d/9821.misc | 1 - changelog.d/9825.misc | 1 - changelog.d/9828.feature | 1 - changelog.d/9832.feature | 1 - changelog.d/9833.bugfix | 1 - changelog.d/9838.misc | 1 - changelog.d/9845.misc | 1 - changelog.d/9850.feature | 1 - changelog.d/9855.misc | 1 - changelog.d/9856.misc | 1 - changelog.d/9858.misc | 1 - changelog.d/9867.bugfix | 1 - changelog.d/9868.bugfix | 1 - changelog.d/9871.misc | 1 - changelog.d/9874.misc | 1 - changelog.d/9875.misc | 1 - changelog.d/9876.misc | 1 - changelog.d/9878.misc | 1 - changelog.d/9879.misc | 1 - changelog.d/9887.misc | 1 - synapse/__init__.py | 2 +- 36 files changed, 54 insertions(+), 35 deletions(-) delete mode 100644 changelog.d/9162.misc delete mode 100644 changelog.d/9726.bugfix delete mode 100644 changelog.d/9786.misc delete mode 100644 changelog.d/9788.bugfix delete mode 100644 changelog.d/9796.misc delete mode 100644 changelog.d/9800.feature delete mode 100644 changelog.d/9801.doc delete mode 100644 changelog.d/9802.bugfix delete mode 100644 changelog.d/9814.feature delete mode 100644 changelog.d/9815.misc delete mode 100644 changelog.d/9816.misc delete mode 100644 changelog.d/9817.misc delete mode 100644 changelog.d/9819.feature delete mode 100644 changelog.d/9820.feature delete mode 100644 changelog.d/9821.misc delete mode 100644 changelog.d/9825.misc delete mode 100644 changelog.d/9828.feature delete mode 100644 changelog.d/9832.feature delete mode 100644 changelog.d/9833.bugfix delete mode 100644 changelog.d/9838.misc delete mode 100644 changelog.d/9845.misc delete mode 100644 changelog.d/9850.feature delete mode 100644 changelog.d/9855.misc delete mode 100644 changelog.d/9856.misc delete mode 100644 changelog.d/9858.misc delete mode 100644 changelog.d/9867.bugfix delete mode 100644 changelog.d/9868.bugfix delete mode 100644 changelog.d/9871.misc delete mode 100644 changelog.d/9874.misc delete mode 100644 
changelog.d/9875.misc delete mode 100644 changelog.d/9876.misc delete mode 100644 changelog.d/9878.misc delete mode 100644 changelog.d/9879.misc delete mode 100644 changelog.d/9887.misc diff --git a/CHANGES.md b/CHANGES.md index 532b30e232..a1f5376ff2 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -1,3 +1,56 @@ +Synapse 1.33.0rc1 (2021-04-28) +============================== + +Features +-------- + +- Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. ([\#9800](https://github.com/matrix-org/synapse/issues/9800), [\#9814](https://github.com/matrix-org/synapse/issues/9814)) +- Add experimental support for handling presence on a worker. ([\#9819](https://github.com/matrix-org/synapse/issues/9819), [\#9820](https://github.com/matrix-org/synapse/issues/9820), [\#9828](https://github.com/matrix-org/synapse/issues/9828), [\#9850](https://github.com/matrix-org/synapse/issues/9850)) +- Don't return an error when a user attempts to renew their account multiple times with the same token. Instead, state when their account is set to expire. This change concerns the optional account validity feature. ([\#9832](https://github.com/matrix-org/synapse/issues/9832)) + + +Bugfixes +-------- + +- Fixes the OIDC SSO flow when using a `public_baseurl` value including a non-root URL path. ([\#9726](https://github.com/matrix-org/synapse/issues/9726)) +- Fix thumbnail generation for some sites with non-standard content types. Contributed by @rkfg. ([\#9788](https://github.com/matrix-org/synapse/issues/9788)) +- Add some sanity checks to identity server passed to 3PID bind/unbind endpoints. ([\#9802](https://github.com/matrix-org/synapse/issues/9802)) +- Limit the size of HTTP responses read over federation. ([\#9833](https://github.com/matrix-org/synapse/issues/9833)) +- Fix a bug which could cause Synapse to get stuck in a loop of resyncing device lists. ([\#9867](https://github.com/matrix-org/synapse/issues/9867)) +- Fix a long-standing bug where errors from federation did not propagate to the client. ([\#9868](https://github.com/matrix-org/synapse/issues/9868)) + + +Improved Documentation +---------------------- + +- Add a note to the docker docs mentioning that we mirror upstream's supported Docker platforms. ([\#9801](https://github.com/matrix-org/synapse/issues/9801)) + + +Internal Changes +---------------- + +- Add a dockerfile for running Synapse in worker-mode under Complement. ([\#9162](https://github.com/matrix-org/synapse/issues/9162)) +- Apply `pyupgrade` across the codebase. ([\#9786](https://github.com/matrix-org/synapse/issues/9786)) +- Move some replication processing out of `generic_worker`. ([\#9796](https://github.com/matrix-org/synapse/issues/9796)) +- Replace `HomeServer.get_config()` with inline references. ([\#9815](https://github.com/matrix-org/synapse/issues/9815)) +- Rename some handlers and config modules to not duplicate the top-level module. ([\#9816](https://github.com/matrix-org/synapse/issues/9816)) +- Fix a long-standing bug which caused `max_upload_size` to not be correctly enforced. ([\#9817](https://github.com/matrix-org/synapse/issues/9817)) +- Reduce CPU usage of the user directory by reusing existing calculated room membership. ([\#9821](https://github.com/matrix-org/synapse/issues/9821)) +- Small speed up for joining large remote rooms. ([\#9825](https://github.com/matrix-org/synapse/issues/9825)) +- Introduce flake8-bugbear to the test suite and fix some of its lint violations. 
([\#9838](https://github.com/matrix-org/synapse/issues/9838)) +- Only store the raw data in the in-memory caches, rather than objects that include references to e.g. the data stores. ([\#9845](https://github.com/matrix-org/synapse/issues/9845)) +- Limit length of accepted email addresses. ([\#9855](https://github.com/matrix-org/synapse/issues/9855)) +- Remove redundant `synapse.types.Collection` type definition. ([\#9856](https://github.com/matrix-org/synapse/issues/9856)) +- Handle recently added rate limits correctly when using `--no-rate-limit` with the demo scripts. ([\#9858](https://github.com/matrix-org/synapse/issues/9858)) +- Disable invite rate-limiting by default when running the unit tests. ([\#9871](https://github.com/matrix-org/synapse/issues/9871)) +- Pass a reactor into `SynapseSite` to make testing easier. ([\#9874](https://github.com/matrix-org/synapse/issues/9874)) +- Make `DomainSpecificString` an `attrs` class. ([\#9875](https://github.com/matrix-org/synapse/issues/9875)) +- Add type hints to `synapse.api.auth` and `synapse.api.auth_blocking` modules. ([\#9876](https://github.com/matrix-org/synapse/issues/9876)) +- Remove redundant `_PushHTTPChannel` test class. ([\#9878](https://github.com/matrix-org/synapse/issues/9878)) +- Remove backwards-compatibility code for Python versions < 3.6. ([\#9879](https://github.com/matrix-org/synapse/issues/9879)) +- Small performance improvement around handling new local presence updates. ([\#9887](https://github.com/matrix-org/synapse/issues/9887)) + + Synapse 1.32.2 (2021-04-22) =========================== diff --git a/changelog.d/9162.misc b/changelog.d/9162.misc deleted file mode 100644 index 1083da8a7a..0000000000 --- a/changelog.d/9162.misc +++ /dev/null @@ -1 +0,0 @@ -Add a dockerfile for running Synapse in worker-mode under Complement. \ No newline at end of file diff --git a/changelog.d/9726.bugfix b/changelog.d/9726.bugfix deleted file mode 100644 index 4ba0b24327..0000000000 --- a/changelog.d/9726.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fixes the OIDC SSO flow when using a `public_baseurl` value including a non-root URL path. \ No newline at end of file diff --git a/changelog.d/9786.misc b/changelog.d/9786.misc deleted file mode 100644 index cf265db749..0000000000 --- a/changelog.d/9786.misc +++ /dev/null @@ -1 +0,0 @@ -Apply `pyupgrade` across the codebase. \ No newline at end of file diff --git a/changelog.d/9788.bugfix b/changelog.d/9788.bugfix deleted file mode 100644 index edb58fbd5b..0000000000 --- a/changelog.d/9788.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix thumbnail generation for some sites with non-standard content types. Contributed by @rkfg. diff --git a/changelog.d/9796.misc b/changelog.d/9796.misc deleted file mode 100644 index 59bb1813c3..0000000000 --- a/changelog.d/9796.misc +++ /dev/null @@ -1 +0,0 @@ -Move some replication processing out of `generic_worker`. diff --git a/changelog.d/9800.feature b/changelog.d/9800.feature deleted file mode 100644 index 9404ad2fc0..0000000000 --- a/changelog.d/9800.feature +++ /dev/null @@ -1 +0,0 @@ -Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. diff --git a/changelog.d/9801.doc b/changelog.d/9801.doc deleted file mode 100644 index 8b8b9d01d4..0000000000 --- a/changelog.d/9801.doc +++ /dev/null @@ -1 +0,0 @@ -Add a note to the docker docs mentioning that we mirror upstream's supported Docker platforms. 
diff --git a/changelog.d/9802.bugfix b/changelog.d/9802.bugfix deleted file mode 100644 index 0c72f7be47..0000000000 --- a/changelog.d/9802.bugfix +++ /dev/null @@ -1 +0,0 @@ -Add some sanity checks to identity server passed to 3PID bind/unbind endpoints. diff --git a/changelog.d/9814.feature b/changelog.d/9814.feature deleted file mode 100644 index 9404ad2fc0..0000000000 --- a/changelog.d/9814.feature +++ /dev/null @@ -1 +0,0 @@ -Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. diff --git a/changelog.d/9815.misc b/changelog.d/9815.misc deleted file mode 100644 index e33d012d3d..0000000000 --- a/changelog.d/9815.misc +++ /dev/null @@ -1 +0,0 @@ -Replace `HomeServer.get_config()` with inline references. diff --git a/changelog.d/9816.misc b/changelog.d/9816.misc deleted file mode 100644 index d098122500..0000000000 --- a/changelog.d/9816.misc +++ /dev/null @@ -1 +0,0 @@ -Rename some handlers and config modules to not duplicate the top-level module. diff --git a/changelog.d/9817.misc b/changelog.d/9817.misc deleted file mode 100644 index 8aa8895f05..0000000000 --- a/changelog.d/9817.misc +++ /dev/null @@ -1 +0,0 @@ -Fix a long-standing bug which caused `max_upload_size` to not be correctly enforced. diff --git a/changelog.d/9819.feature b/changelog.d/9819.feature deleted file mode 100644 index f56b0bb3bd..0000000000 --- a/changelog.d/9819.feature +++ /dev/null @@ -1 +0,0 @@ -Add experimental support for handling presence on a worker. diff --git a/changelog.d/9820.feature b/changelog.d/9820.feature deleted file mode 100644 index f56b0bb3bd..0000000000 --- a/changelog.d/9820.feature +++ /dev/null @@ -1 +0,0 @@ -Add experimental support for handling presence on a worker. diff --git a/changelog.d/9821.misc b/changelog.d/9821.misc deleted file mode 100644 index 03b2d2ed4d..0000000000 --- a/changelog.d/9821.misc +++ /dev/null @@ -1 +0,0 @@ -Reduce CPU usage of the user directory by reusing existing calculated room membership. \ No newline at end of file diff --git a/changelog.d/9825.misc b/changelog.d/9825.misc deleted file mode 100644 index 42f3f15619..0000000000 --- a/changelog.d/9825.misc +++ /dev/null @@ -1 +0,0 @@ -Small speed up for joining large remote rooms. diff --git a/changelog.d/9828.feature b/changelog.d/9828.feature deleted file mode 100644 index f56b0bb3bd..0000000000 --- a/changelog.d/9828.feature +++ /dev/null @@ -1 +0,0 @@ -Add experimental support for handling presence on a worker. diff --git a/changelog.d/9832.feature b/changelog.d/9832.feature deleted file mode 100644 index e76395fbe8..0000000000 --- a/changelog.d/9832.feature +++ /dev/null @@ -1 +0,0 @@ -Don't return an error when a user attempts to renew their account multiple times with the same token. Instead, state when their account is set to expire. This change concerns the optional account validity feature. \ No newline at end of file diff --git a/changelog.d/9833.bugfix b/changelog.d/9833.bugfix deleted file mode 100644 index 56f9c9626b..0000000000 --- a/changelog.d/9833.bugfix +++ /dev/null @@ -1 +0,0 @@ -Limit the size of HTTP responses read over federation. diff --git a/changelog.d/9838.misc b/changelog.d/9838.misc deleted file mode 100644 index b98ce56309..0000000000 --- a/changelog.d/9838.misc +++ /dev/null @@ -1 +0,0 @@ -Introduce flake8-bugbear to the test suite and fix some of its lint violations. 
\ No newline at end of file diff --git a/changelog.d/9845.misc b/changelog.d/9845.misc deleted file mode 100644 index 875dd6d131..0000000000 --- a/changelog.d/9845.misc +++ /dev/null @@ -1 +0,0 @@ -Only store the raw data in the in-memory caches, rather than objects that include references to e.g. the data stores. diff --git a/changelog.d/9850.feature b/changelog.d/9850.feature deleted file mode 100644 index f56b0bb3bd..0000000000 --- a/changelog.d/9850.feature +++ /dev/null @@ -1 +0,0 @@ -Add experimental support for handling presence on a worker. diff --git a/changelog.d/9855.misc b/changelog.d/9855.misc deleted file mode 100644 index 6a3d700fde..0000000000 --- a/changelog.d/9855.misc +++ /dev/null @@ -1 +0,0 @@ -Limit length of accepted email addresses. diff --git a/changelog.d/9856.misc b/changelog.d/9856.misc deleted file mode 100644 index d67e8c386a..0000000000 --- a/changelog.d/9856.misc +++ /dev/null @@ -1 +0,0 @@ -Remove redundant `synapse.types.Collection` type definition. diff --git a/changelog.d/9858.misc b/changelog.d/9858.misc deleted file mode 100644 index f7e286fa69..0000000000 --- a/changelog.d/9858.misc +++ /dev/null @@ -1 +0,0 @@ -Handle recently added rate limits correctly when using `--no-rate-limit` with the demo scripts. diff --git a/changelog.d/9867.bugfix b/changelog.d/9867.bugfix deleted file mode 100644 index f236de247d..0000000000 --- a/changelog.d/9867.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a bug which could cause Synapse to get stuck in a loop of resyncing device lists. diff --git a/changelog.d/9868.bugfix b/changelog.d/9868.bugfix deleted file mode 100644 index e2b4f97ad5..0000000000 --- a/changelog.d/9868.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a long-standing bug where errors from federation did not propagate to the client. diff --git a/changelog.d/9871.misc b/changelog.d/9871.misc deleted file mode 100644 index b19acfab62..0000000000 --- a/changelog.d/9871.misc +++ /dev/null @@ -1 +0,0 @@ -Disable invite rate-limiting by default when running the unit tests. \ No newline at end of file diff --git a/changelog.d/9874.misc b/changelog.d/9874.misc deleted file mode 100644 index ba1097e65e..0000000000 --- a/changelog.d/9874.misc +++ /dev/null @@ -1 +0,0 @@ -Pass a reactor into `SynapseSite` to make testing easier. diff --git a/changelog.d/9875.misc b/changelog.d/9875.misc deleted file mode 100644 index 9345c0bf45..0000000000 --- a/changelog.d/9875.misc +++ /dev/null @@ -1 +0,0 @@ -Make `DomainSpecificString` an `attrs` class. diff --git a/changelog.d/9876.misc b/changelog.d/9876.misc deleted file mode 100644 index 28390e32e6..0000000000 --- a/changelog.d/9876.misc +++ /dev/null @@ -1 +0,0 @@ -Add type hints to `synapse.api.auth` and `synapse.api.auth_blocking` modules. diff --git a/changelog.d/9878.misc b/changelog.d/9878.misc deleted file mode 100644 index 927876852d..0000000000 --- a/changelog.d/9878.misc +++ /dev/null @@ -1 +0,0 @@ -Remove redundant `_PushHTTPChannel` test class. diff --git a/changelog.d/9879.misc b/changelog.d/9879.misc deleted file mode 100644 index c9ca37cf48..0000000000 --- a/changelog.d/9879.misc +++ /dev/null @@ -1 +0,0 @@ -Remove backwards-compatibility code for Python versions < 3.6. \ No newline at end of file diff --git a/changelog.d/9887.misc b/changelog.d/9887.misc deleted file mode 100644 index 650ebf85e6..0000000000 --- a/changelog.d/9887.misc +++ /dev/null @@ -1 +0,0 @@ -Small performance improvement around handling new local presence updates. 
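Circling back to the `store_destination_rooms_entries` refactor that opens this series: every row upserted into `destination_rooms` now shares a single `stream_ordering`, which is why the patch builds `value_values` by repeating one single-element tuple per row. A minimal self-contained sketch of that shape (illustrative only; no database or Synapse imports involved, and list-multiplication of tuples is safe here because tuples are immutable):

```python
# Sketch of the row/value shape used by the refactored
# store_destination_rooms_entries; illustrative only, no database involved.
destinations = ["remote1.example.org", "remote2.example.org"]
room_id = "!room:example.org"
stream_ordering = 1234

# One (destination, room_id) key per row...
rows = [(destination, room_id) for destination in destinations]
# ...and the same one-column value for every row. Multiplying a list of
# tuples copies references, which is fine since tuples are immutable.
value_values = [(stream_ordering,)] * len(rows)

assert rows == [
    ("remote1.example.org", "!room:example.org"),
    ("remote2.example.org", "!room:example.org"),
]
assert value_values == [(1234,), (1234,)]
```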
diff --git a/synapse/__init__.py b/synapse/__init__.py index fbd49a93e1..5bbaa62de2 100644 --- a/synapse/__init__.py +++ b/synapse/__init__.py @@ -47,7 +47,7 @@ try: except ImportError: pass -__version__ = "1.32.2" +__version__ = "1.33.0rc1" if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): # We import here so that we don't have to install a bunch of deps when -- cgit 1.5.1 From 8ba086980dbe4272a6ad2f529ae7b955b93bb9b0 Mon Sep 17 00:00:00 2001 From: Andrew Morgan Date: Wed, 28 Apr 2021 12:07:49 +0100 Subject: Reword account validity template change to sound less like a bugfix --- CHANGES.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGES.md b/CHANGES.md index a1f5376ff2..9a41607679 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -6,7 +6,7 @@ Features - Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. ([\#9800](https://github.com/matrix-org/synapse/issues/9800), [\#9814](https://github.com/matrix-org/synapse/issues/9814)) - Add experimental support for handling presence on a worker. ([\#9819](https://github.com/matrix-org/synapse/issues/9819), [\#9820](https://github.com/matrix-org/synapse/issues/9820), [\#9828](https://github.com/matrix-org/synapse/issues/9828), [\#9850](https://github.com/matrix-org/synapse/issues/9850)) -- Don't return an error when a user attempts to renew their account multiple times with the same token. Instead, state when their account is set to expire. This change concerns the optional account validity feature. ([\#9832](https://github.com/matrix-org/synapse/issues/9832)) +- Return a new template when a user attempts to renew their account multiple times with the same token, stating that their account is set to expire. This replaces the invalid token template that would previously be shown in this case. This change concerns the optional account validity feature. ([\#9832](https://github.com/matrix-org/synapse/issues/9832)) Bugfixes -- cgit 1.5.1 From e4ab8676b4b5a3336ef49bb68a0e6dabbf030df4 Mon Sep 17 00:00:00 2001 From: Erik Johnston Date: Wed, 28 Apr 2021 14:42:50 +0100 Subject: Fix tight loop handling presence replication. (#9900) Only affects workers. Introduced in #9819. Fixes #9899. --- changelog.d/9900.bugfix | 1 + synapse/handlers/presence.py | 24 +++++++++++++++++++++++- tests/handlers/test_presence.py | 22 ++++++++++++++++++++++ 3 files changed, 46 insertions(+), 1 deletion(-) create mode 100644 changelog.d/9900.bugfix diff --git a/changelog.d/9900.bugfix b/changelog.d/9900.bugfix new file mode 100644 index 0000000000..a8470fca3f --- /dev/null +++ b/changelog.d/9900.bugfix @@ -0,0 +1 @@ +Fix tight loop handling presence replication when using workers. Introduced in v1.33.0rc1. diff --git a/synapse/handlers/presence.py b/synapse/handlers/presence.py index 969c73c1e7..12df35f26e 100644 --- a/synapse/handlers/presence.py +++ b/synapse/handlers/presence.py @@ -2026,18 +2026,40 @@ class PresenceFederationQueue: ) return result["updates"], result["upto_token"], result["limited"] + # If the from_token is the current token then there's nothing to return + # and we can trivially no-op. + if from_token == self._next_id - 1: + return [], upto_token, False + # We can find the correct position in the queue by noting that there is # exactly one entry per stream ID, and that the last entry has an ID of # `self._next_id - 1`, so we can count backwards from the end. 
# + # Since we are returning all states in the range `from_token < stream_id + # <= upto_token` we look for the index with a `stream_id` of `from_token + # + 1`. + # # Since the start of the queue is periodically truncated we need to # handle the case where `from_token` stream ID has already been dropped. - start_idx = max(from_token - self._next_id, -len(self._queue)) + start_idx = max(from_token + 1 - self._next_id, -len(self._queue)) to_send = [] # type: List[Tuple[int, Tuple[str, str]]] limited = False new_id = upto_token for _, stream_id, destinations, user_ids in self._queue[start_idx:]: + if stream_id <= from_token: + # Paranoia check that we are actually only sending states that + # have a stream_id strictly greater than from_token. We should + # never hit this. + logger.warning( + "Tried returning presence federation stream ID: %d less than from_token: %d (next_id: %d, len: %d)", + stream_id, + from_token, + self._next_id, + len(self._queue), + ) + continue + if stream_id > upto_token: break diff --git a/tests/handlers/test_presence.py b/tests/handlers/test_presence.py index 61271cd084..ce330e79cc 100644 --- a/tests/handlers/test_presence.py +++ b/tests/handlers/test_presence.py @@ -509,6 +509,14 @@ class PresenceFederationQueueTestCase(unittest.HomeserverTestCase): self.assertCountEqual(rows, expected_rows) + now_token = self.queue.get_current_token(self.instance_name) + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", upto_token, now_token, 10) + ) + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + self.assertCountEqual(rows, []) + def test_send_and_get_split(self): state1 = UserPresenceState.default("@user1:test") state2 = UserPresenceState.default("@user2:test") @@ -538,6 +546,20 @@ class PresenceFederationQueueTestCase(unittest.HomeserverTestCase): self.assertCountEqual(rows, expected_rows) + now_token = self.queue.get_current_token(self.instance_name) + rows, upto_token, limited = self.get_success( + self.queue.get_replication_rows("master", upto_token, now_token, 10) + ) + + self.assertEqual(upto_token, now_token) + self.assertFalse(limited) + + expected_rows = [ + (2, ("dest3", "@user3:test")), + ] + + self.assertCountEqual(rows, expected_rows) + def test_clear_queue_all(self): state1 = UserPresenceState.default("@user1:test") state2 = UserPresenceState.default("@user2:test") -- cgit 1.5.1 From e9444cc74d73f6367dedcfe406e3f1d9ff3d5414 Mon Sep 17 00:00:00 2001 From: Andrew Morgan Date: Thu, 29 Apr 2021 11:45:37 +0100 Subject: 1.33.0rc2 --- CHANGES.md | 9 +++++++++ changelog.d/9900.bugfix | 1 - synapse/__init__.py | 2 +- 3 files changed, 10 insertions(+), 2 deletions(-) delete mode 100644 changelog.d/9900.bugfix diff --git a/CHANGES.md b/CHANGES.md index 9a41607679..629d4a180d 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -1,3 +1,12 @@ +Synapse 1.33.0rc2 (2021-04-29) +============================== + +Bugfixes +-------- + +- Fix tight loop handling presence replication when using workers. Introduced in v1.33.0rc1. ([\#9900](https://github.com/matrix-org/synapse/issues/9900)) + + Synapse 1.33.0rc1 (2021-04-28) ============================== diff --git a/changelog.d/9900.bugfix b/changelog.d/9900.bugfix deleted file mode 100644 index a8470fca3f..0000000000 --- a/changelog.d/9900.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix tight loop handling presence replication when using workers. Introduced in v1.33.0rc1. 
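As an aside on the #9900 fix above: the `start_idx` arithmetic is easiest to check with a tiny standalone model of the queue. This is a hypothetical sketch (plain lists and strings, not the real `PresenceFederationQueue`); only the `from_token + 1 - next_id` expression mirrors the patch:

```python
from typing import List, Tuple

def replication_slice(
    queue: List[Tuple[int, str]],  # (stream_id, payload); stream_ids contiguous
    next_id: int,                  # one past the newest stream_id ever assigned
    from_token: int,
    upto_token: int,
) -> List[Tuple[int, str]]:
    # The last entry has stream_id `next_id - 1` and there is exactly one
    # entry per stream_id, so the first entry we must return (stream_id
    # `from_token + 1`) sits at this negative offset from the end; clamping
    # to -len(queue) handles a truncated (partially dropped) prefix.
    start_idx = max(from_token + 1 - next_id, -len(queue))
    return [
        (stream_id, payload)
        for stream_id, payload in queue[start_idx:]
        if from_token < stream_id <= upto_token
    ]

# Queue holds stream_ids 3..6 (0..2 already truncated), so next_id == 7.
queue = [(3, "c"), (4, "d"), (5, "e"), (6, "f")]

# Caller has everything up to 4: only 5 and 6 come back.
assert replication_slice(queue, 7, from_token=4, upto_token=6) == [(5, "e"), (6, "f")]
# Caller is fully up to date: nothing comes back (the patch also adds an
# explicit early return for this `from_token == next_id - 1` case).
assert replication_slice(queue, 7, from_token=6, upto_token=6) == []
```

Without the `+ 1`, `start_idx` points one entry too early, so a worker polling with its current token is handed the entry for `from_token` again, advances nowhere, and immediately polls again: the tight loop the release note describes. The new assertions added to `test_presence.py` above pin down exactly this "no new rows" case.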
diff --git a/synapse/__init__.py b/synapse/__init__.py index 5bbaa62de2..319c52be2c 100644 --- a/synapse/__init__.py +++ b/synapse/__init__.py @@ -47,7 +47,7 @@ try: except ImportError: pass -__version__ = "1.33.0rc1" +__version__ = "1.33.0rc2" if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): # We import here so that we don't have to install a bunch of deps when -- cgit 1.5.1 From d11f2dfee519a4136def4169cef0ef218ebf1e19 Mon Sep 17 00:00:00 2001 From: Andrew Morgan Date: Thu, 29 Apr 2021 14:31:14 +0100 Subject: typo in changelog --- CHANGES.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGES.md b/CHANGES.md index 629d4a180d..bdeb614b9e 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -4,7 +4,7 @@ Synapse 1.33.0rc2 (2021-04-29) Bugfixes -------- -- Fix tight loop handling presence replication when using workers. Introduced in v1.33.0rc1. ([\#9900](https://github.com/matrix-org/synapse/issues/9900)) +- Fix tight loop when handling presence replication when using workers. Introduced in v1.33.0rc1. ([\#9900](https://github.com/matrix-org/synapse/issues/9900)) Synapse 1.33.0rc1 (2021-04-28) -- cgit 1.5.1 From 56c4b47df36a99d2fc57a64013a4f482e25ee097 Mon Sep 17 00:00:00 2001 From: Dan Callahan Date: Fri, 30 Apr 2021 15:36:05 +0100 Subject: Build Debian packages for Ubuntu 21.04 Hirsute (#9909) Signed-off-by: Dan Callahan --- changelog.d/9909.feature | 1 + scripts-dev/build_debian_packages | 7 ++++--- 2 files changed, 5 insertions(+), 3 deletions(-) create mode 100644 changelog.d/9909.feature diff --git a/changelog.d/9909.feature b/changelog.d/9909.feature new file mode 100644 index 0000000000..41aba5cca6 --- /dev/null +++ b/changelog.d/9909.feature @@ -0,0 +1 @@ +Build Debian packages for Ubuntu 21.04 (Hirsute Hippo). diff --git a/scripts-dev/build_debian_packages b/scripts-dev/build_debian_packages index 3bb6e2c7ea..07d018db99 100755 --- a/scripts-dev/build_debian_packages +++ b/scripts-dev/build_debian_packages @@ -21,9 +21,10 @@ DISTS = ( "debian:buster", "debian:bullseye", "debian:sid", - "ubuntu:bionic", - "ubuntu:focal", - "ubuntu:groovy", + "ubuntu:bionic", # 18.04 LTS (our EOL forced by Py36 on 2021-12-23) + "ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14) + "ubuntu:groovy", # 20.10 (EOL 2021-07-07) + "ubuntu:hirsute", # 21.04 (EOL 2022-01-05) ) DESC = '''\ -- cgit 1.5.1 From 0644ac0989d06fc0ba5d08a5412d65d75f7d8db2 Mon Sep 17 00:00:00 2001 From: Brendan Abolivier Date: Wed, 5 May 2021 14:15:54 +0100 Subject: 1.33.0 --- CHANGES.md | 9 +++++++++ changelog.d/9909.feature | 1 - debian/changelog | 6 ++++++ synapse/__init__.py | 2 +- 4 files changed, 16 insertions(+), 2 deletions(-) delete mode 100644 changelog.d/9909.feature diff --git a/CHANGES.md b/CHANGES.md index bdeb614b9e..206859cff7 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -1,3 +1,12 @@ +Synapse 1.33.0 (2021-05-05) +=========================== + +Features +-------- + +- Build Debian packages for Ubuntu 21.04 (Hirsute Hippo). ([\#9909](https://github.com/matrix-org/synapse/issues/9909)) + + Synapse 1.33.0rc2 (2021-04-29) ============================== diff --git a/changelog.d/9909.feature b/changelog.d/9909.feature deleted file mode 100644 index 41aba5cca6..0000000000 --- a/changelog.d/9909.feature +++ /dev/null @@ -1 +0,0 @@ -Build Debian packages for Ubuntu 21.04 (Hirsute Hippo). 
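The EOL dates added as comments to `DISTS` above are informational only; nothing in the script reads them. As a hypothetical illustration of how such dates could be made machine-checkable (none of this code is in `build_debian_packages`):

```python
from datetime import date

# Hypothetical EOL table mirroring the comments added to DISTS above; the
# build script itself does no such filtering.
UBUNTU_EOL = {
    "ubuntu:bionic": date(2021, 12, 23),  # forced by Python 3.6 EOL
    "ubuntu:focal": date(2024, 10, 14),   # forced by Python 3.8 EOL
    "ubuntu:groovy": date(2021, 7, 7),
    "ubuntu:hirsute": date(2022, 1, 5),
}

def active_dists(dists, today):
    # Dists without an entry (the Debian ones here) are treated as active.
    return [d for d in dists if UBUNTU_EOL.get(d, date.max) > today]

print(active_dists(
    ["debian:buster", "ubuntu:groovy", "ubuntu:hirsute"],
    today=date(2021, 8, 1),
))
# -> ['debian:buster', 'ubuntu:hirsute']  (groovy went EOL on 2021-07-07)
```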
diff --git a/debian/changelog b/debian/changelog index fd33bfda5c..b54d3f2bc6 100644 --- a/debian/changelog +++ b/debian/changelog @@ -1,3 +1,9 @@ +matrix-synapse-py3 (1.33.0) stable; urgency=medium + + * New synapse release 1.33.0. + + -- Synapse Packaging team Wed, 05 May 2021 14:15:27 +0100 + matrix-synapse-py3 (1.32.2) stable; urgency=medium * New synapse release 1.32.2. diff --git a/synapse/__init__.py b/synapse/__init__.py index 319c52be2c..5eac40730a 100644 --- a/synapse/__init__.py +++ b/synapse/__init__.py @@ -47,7 +47,7 @@ try: except ImportError: pass -__version__ = "1.33.0rc2" +__version__ = "1.33.0" if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): # We import here so that we don't have to install a bunch of deps when -- cgit 1.5.1