author     David Robertson <davidr@element.io>    2022-04-11 12:55:23 +0100
committer  GitHub <noreply@github.com>            2022-04-11 12:55:23 +0100
commit     fb611f1899665f725f11e7fc51f49c6a06fd6eef
tree       0957cfebb90eed9a86d8f3962452c48465270fa8
parent     Merge branch 'develop' into dmr/pyproject-poetry
parent     Disallow untyped defs in synapse._scripts (#12422)
download   synapse-fb611f1899665f725f11e7fc51f49c6a06fd6eef.tar.xz
Merge branch 'develop' into dmr/pyproject-poetry
31 files changed, 907 insertions, 193 deletions
diff --git a/.ci/scripts/test_export_data_command.sh b/.ci/scripts/test_export_data_command.sh
index 3f9d72ea1e..033fd3e24e 100755
--- a/.ci/scripts/test_export_data_command.sh
+++ b/.ci/scripts/test_export_data_command.sh
@@ -2,7 +2,8 @@
 
 # Test for the export-data admin command against sqlite and postgres
 
-# Expects Synapse to have been already installed with `poetry install -e .[postgres]
+# Expects Synapse to have been already installed with `poetry install --extras postgres`.
+# Expects `poetry` to be available on the `PATH`.
 
 set -xe
 cd "$(dirname "$0")/../.."
diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index d9cb383e85..748fe0c3d1 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -355,27 +355,11 @@ jobs:
             (wget -O - "https://github.com/matrix-org/complement/archive/$BRANCH_NAME.tar.gz" | tar -xz --strip-components=1 -C complement) && break
           done
 
-      # Build initial Synapse image
-      - run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
-        working-directory: synapse
-        env:
-          DOCKER_BUILDKIT: 1
-
-      # Build a ready-to-run Synapse image based on the initial image above.
-      # This new image includes a config file, keys for signing and TLS, and
-      # other settings to make it suitable for testing under Complement.
-      - run: docker build -t complement-synapse -f Synapse.Dockerfile .
-        working-directory: complement/dockerfiles
-
-      # Run Complement
       - run: |
           set -o pipefail
-          go test -v -json -tags synapse_blacklist,msc2716,msc3030 ./tests/... 2>&1 | gotestfmt
+          COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt
         shell: bash
         name: Run Complement Tests
-        env:
-          COMPLEMENT_BASE_IMAGE: complement-synapse:latest
-        working-directory: complement
 
   # a job which marks all the other jobs as complete, thus allowing PRs to be merged.
   tests-done:
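The workflow change above collapses the inline docker-build steps into a single call to `scripts-dev/complement.sh` (whose new body appears further down this diff). For local debugging, a rough Python equivalent of that one CI step might look like the sketch below; the relative `synapse/` and `complement/` checkout paths mirror the workflow's layout and are assumptions, not part of the commit:

```python
import os
import subprocess

# Same environment the workflow sets: COMPLEMENT_DIR points at the complement
# checkout. The script builds both Docker images itself before running the tests.
env = dict(os.environ, COMPLEMENT_DIR=os.path.abspath("complement"))

subprocess.run(
    ["synapse/scripts-dev/complement.sh", "-json"],
    env=env,
    check=True,
)
```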
diff --git a/changelog.d/12391.feature b/changelog.d/12391.feature
new file mode 100644
index 0000000000..9a064ec8be
--- /dev/null
+++ b/changelog.d/12391.feature
@@ -0,0 +1 @@
+Add a module API for reading and writing global account data.
diff --git a/changelog.d/12404.misc b/changelog.d/12404.misc
new file mode 100644
index 0000000000..00100fdfef
--- /dev/null
+++ b/changelog.d/12404.misc
@@ -0,0 +1 @@
+Add files used to build the Docker image used for complement testing into the Synapse repository.
diff --git a/changelog.d/12418.misc b/changelog.d/12418.misc
new file mode 100644
index 0000000000..d4b333ce81
--- /dev/null
+++ b/changelog.d/12418.misc
@@ -0,0 +1 @@
+Run the CI export-data script in the locked poetry environment.
diff --git a/changelog.d/12422.misc b/changelog.d/12422.misc
new file mode 100644
index 0000000000..3a7cbc34e7
--- /dev/null
+++ b/changelog.d/12422.misc
@@ -0,0 +1 @@
+Make `synapse._scripts` pass type checks.
diff --git a/docker/README-testing.md b/docker/README-testing.md
index b010509275..c38cae7530 100644
--- a/docker/README-testing.md
+++ b/docker/README-testing.md
@@ -10,10 +10,10 @@ Note that running Synapse's unit tests from within the docker image is not suppo
 
 ## Testing with SQLite and single-process Synapse
 
-> Note that `scripts-dev/complement.sh` is a script that will automatically build 
+> Note that `scripts-dev/complement.sh` is a script that will automatically build
 > and run an SQLite-based, single-process of Synapse against Complement.
 
-The instructions below will set up Complement testing for a single-process, 
+The instructions below will set up Complement testing for a single-process,
 SQLite-based Synapse deployment.
 
 Start by building the base Synapse docker image. If you wish to run tests with the latest
@@ -26,23 +26,22 @@ docker build -t matrixdotorg/synapse -f docker/Dockerfile .
 
 This will build an image with the tag `matrixdotorg/synapse`.
 
-Next, build the Synapse image for Complement. You will need a local checkout
-of Complement. Change to the root of your Complement checkout and run:
+Next, build the Synapse image for Complement.
 
 ```sh
-docker build -t complement-synapse -f "dockerfiles/Synapse.Dockerfile" dockerfiles
+docker build -t complement-synapse -f "docker/complement/Dockerfile" docker/complement
 ```
 
-This will build an image with the tag `complement-synapse`, which can be handed to 
-Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to 
-[Complement's documentation](https://github.com/matrix-org/complement/#running) for 
+This will build an image with the tag `complement-synapse`, which can be handed to
+Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
+[Complement's documentation](https://github.com/matrix-org/complement/#running) for
 how to run the tests, as well as the various available command line flags.
 
 ## Testing with PostgreSQL and single or multi-process Synapse
 
-The above docker image only supports running Synapse with SQLite and in a 
-single-process topology. The following instructions are used to build a Synapse image for 
-Complement that supports either single or multi-process topology with a PostgreSQL 
+The above docker image only supports running Synapse with SQLite and in a
+single-process topology. The following instructions are used to build a Synapse image for
+Complement that supports either single or multi-process topology with a PostgreSQL
 database backend.
 
 As with the single-process image, build the base Synapse docker image. If you wish to run
@@ -55,7 +54,7 @@ docker build -t matrixdotorg/synapse -f docker/Dockerfile .
 
 This will build an image with the tag `matrixdotorg/synapse`.
 
-Next, we build a new image with worker support based on `matrixdotorg/synapse:latest`. 
+Next, we build a new image with worker support based on `matrixdotorg/synapse:latest`.
 Again, from the root of the repository:
 
 ```sh
@@ -64,18 +63,17 @@ docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
 
 This will build an image with the tag` matrixdotorg/synapse-workers`.
 
-It's worth noting at this point that this image is fully functional, and 
-can be used for testing against locally. See instructions for using the container 
+It's worth noting at this point that this image is fully functional, and
+can be used for testing against locally. See instructions for using the container
 under
 [Running the Dockerfile-worker image standalone](#running-the-dockerfile-worker-image-standalone)
 below.
 
 Finally, build the Synapse image for Complement, which is based on
-`matrixdotorg/synapse-workers`. You will need a local checkout of Complement. Change to
-the root of your Complement checkout and run:
+`matrixdotorg/synapse-workers`.
 
 ```sh
-docker build -t matrixdotorg/complement-synapse-workers -f dockerfiles/SynapseWorkers.Dockerfile dockerfiles
+docker build -t matrixdotorg/complement-synapse-workers -f docker/complement/SynapseWorkers.Dockerfile docker/complement
 ```
 
 This will build an image with the tag `complement-synapse-workers`, which can be handed to
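Both Complement images added later in this diff declare Docker `HEALTHCHECK`s that probe `http://localhost:8008/health`. A harness running outside Docker can wait on the same endpoint before handing the container to tests; the sketch below is illustrative only (the port mapping and 30-second timeout are assumptions, not part of the commit):

```python
import time
import urllib.error
import urllib.request

def wait_for_synapse(url: str = "http://localhost:8008/health", timeout: float = 30.0) -> None:
    """Poll the /health endpoint until it answers 200, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return
        except urllib.error.URLError:
            pass  # container not up yet; retry
        time.sleep(1)
    raise TimeoutError(f"Synapse did not become healthy within {timeout}s")

wait_for_synapse()
```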
@@ -91,10 +89,10 @@ bundling all necessary components together for a workerised homeserver instance.
 
 This includes any desired Synapse worker processes, a nginx to route traffic accordingly, a
 redis for worker communication and a supervisord instance to start up and monitor all
-processes. You will need to provide your own postgres container to connect to, and TLS 
+processes. You will need to provide your own postgres container to connect to, and TLS
 is not handled by the container.
 
-Once you've built the image using the above instructions, you can run it. Be sure 
+Once you've built the image using the above instructions, you can run it. Be sure
 you've set up a volume according to the [usual Synapse docker instructions](README.md).
 Then run something along the lines of:
 
@@ -112,7 +110,7 @@ docker run -d --name synapse \
     matrixdotorg/synapse-workers
 ```
 
-...substituting `POSTGRES*` variables for those that match a postgres host you have 
+...substituting `POSTGRES*` variables for those that match a postgres host you have
 available (usually a running postgres docker container).
 
 The `SYNAPSE_WORKER_TYPES` environment variable is a comma-separated list of workers to
@@ -130,11 +128,11 @@ Otherwise, `SYNAPSE_WORKER_TYPES` can either be left empty or unset to spawn no
 (leaving only the main process).
 The container is configured to use redis-based worker mode.
 
-Logs for workers and the main process are logged to stdout and can be viewed with 
-standard `docker logs` tooling. Worker logs contain their worker name 
+Logs for workers and the main process are logged to stdout and can be viewed with
+standard `docker logs` tooling. Worker logs contain their worker name
 after the timestamp.
 
 Setting `SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1` will cause worker logs to be written to
 `<data_dir>/logs/<worker_name>.log`. Logs are kept for 1 week and rotate every day at 00:
-00, according to the container's clock. Logging for the main process must still be 
+00, according to the container's clock. Logging for the main process must still be
 configured by modifying the homeserver's log config in your Synapse data volume.
diff --git a/docker/complement/Dockerfile b/docker/complement/Dockerfile
new file mode 100644
index 0000000000..4823ce7364
--- /dev/null
+++ b/docker/complement/Dockerfile
@@ -0,0 +1,22 @@
+# A dockerfile which builds an image suitable for testing Synapse under
+# complement.
+
+ARG SYNAPSE_VERSION=latest
+
+FROM matrixdotorg/synapse:${SYNAPSE_VERSION}
+
+ENV SERVER_NAME=localhost
+
+COPY conf/* /conf/
+
+# generate a signing key
+RUN generate_signing_key -o /conf/server.signing.key
+
+WORKDIR /data
+
+EXPOSE 8008 8448
+
+ENTRYPOINT ["/conf/start.sh"]
+
+HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
+    CMD curl -fSs http://localhost:8008/health || exit 1
diff --git a/docker/complement/README.md b/docker/complement/README.md
new file mode 100644
index 0000000000..e075418e4a
--- /dev/null
+++ b/docker/complement/README.md
@@ -0,0 +1 @@
+Stuff for building the docker image used for testing under complement.
diff --git a/docker/complement/SynapseWorkers.Dockerfile b/docker/complement/SynapseWorkers.Dockerfile
new file mode 100644
index 0000000000..982219a91e
--- /dev/null
+++ b/docker/complement/SynapseWorkers.Dockerfile
@@ -0,0 +1,73 @@
+# This dockerfile builds on top of 'docker/Dockerfile-worker' in matrix-org/synapse
+# by including a built-in postgres instance, as well as setting up the homeserver so
+# that it is ready for testing via Complement.
+#
+# Instructions for building this image from those it depends on is detailed in this guide:
+# https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
+FROM matrixdotorg/synapse-workers
+
+# Download a caddy server to stand in front of nginx and terminate TLS using Complement's
+# custom CA.
+# We include this near the top of the file in order to cache the result.
+RUN curl -OL "https://github.com/caddyserver/caddy/releases/download/v2.3.0/caddy_2.3.0_linux_amd64.tar.gz" && \
+  tar xzf caddy_2.3.0_linux_amd64.tar.gz && rm caddy_2.3.0_linux_amd64.tar.gz && mv caddy /root
+
+# Install postgresql
+RUN apt-get update
+RUN apt-get install -y postgresql
+
+# Configure a user and create a database for Synapse
+RUN pg_ctlcluster 13 main start && su postgres -c "echo \
+ \"ALTER USER postgres PASSWORD 'somesecret'; \
+ CREATE DATABASE synapse \
+  ENCODING 'UTF8' \
+  LC_COLLATE='C' \
+  LC_CTYPE='C' \
+  template=template0;\" | psql" && pg_ctlcluster 13 main stop
+
+# Modify the shared homeserver config with postgres support, certificate setup
+# and the disabling of rate-limiting
+COPY conf-workers/workers-shared.yaml /conf/workers/shared.yaml
+
+WORKDIR /data
+
+# Copy the caddy config
+COPY conf-workers/caddy.complement.json /root/caddy.json
+
+# Expose caddy's listener ports
+EXPOSE 8008 8448
+
+ENTRYPOINT \
+  # Replace the server name in the caddy config
+  sed -i "s/{{ server_name }}/${SERVER_NAME}/g" /root/caddy.json && \
+  # Start postgres
+  pg_ctlcluster 13 main start 2>&1 && \
+  # Start caddy
+  /root/caddy start --config /root/caddy.json 2>&1 && \
+  # Set the server name of the homeserver
+  SYNAPSE_SERVER_NAME=${SERVER_NAME} \
+  # No need to report stats here
+  SYNAPSE_REPORT_STATS=no \
+  # Set postgres authentication details which will be placed in the homeserver config file
+  POSTGRES_PASSWORD=somesecret POSTGRES_USER=postgres POSTGRES_HOST=localhost \
+  # Specify the workers to test with
+  SYNAPSE_WORKER_TYPES="\
+    event_persister, \
+    event_persister, \
+    background_worker, \
+    frontend_proxy, \
+    event_creator, \
+    user_dir, \
+    media_repository, \
+    federation_inbound, \
+    federation_reader, \
+    federation_sender, \
+    synchrotron, \
+    appservice, \
+    pusher" \
+  # Run the script that writes the necessary config files and starts supervisord, which in turn
+  # starts everything else
+  /configure_workers_and_start.py
+
+HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
+    CMD /bin/sh /healthcheck.sh
diff --git a/docker/complement/conf-workers/caddy.complement.json b/docker/complement/conf-workers/caddy.complement.json
new file mode 100644
index 0000000000..09e2136af2
--- /dev/null
+++ b/docker/complement/conf-workers/caddy.complement.json
@@ -0,0 +1,72 @@
+{
+  "apps": {
+    "http": {
+      "servers": {
+        "srv0": {
+          "listen": [
+            ":8448"
+          ],
+          "routes": [
+            {
+              "match": [
+                {
+                  "host": [
+                    "{{ server_name }}"
+                  ]
+                }
+              ],
+              "handle": [
+                {
+                  "handler": "subroute",
+                  "routes": [
+                    {
+                      "handle": [
+                        {
+                          "handler": "reverse_proxy",
+                          "upstreams": [
+                            {
+                              "dial": "localhost:8008"
+                            }
+                          ]
+                        }
+                      ]
+                    }
+                  ]
+                }
+              ],
+              "terminal": true
+            }
+          ]
+        }
+      }
+    },
+    "tls": {
+      "automation": {
+        "policies": [
+          {
+            "subjects": [
+              "{{ server_name }}"
+            ],
+            "issuers": [
+              {
+                "module": "internal"
+              }
+            ],
+            "on_demand": true
+          }
+        ]
+      }
+    },
+    "pki": {
+      "certificate_authorities": {
+        "local": {
+          "name": "Complement CA",
+          "root": {
+            "certificate": "/complement/ca/ca.crt",
+            "private_key": "/complement/ca/ca.key"
+          }
+        }
+      }
+    }
+  }
+}
diff --git a/docker/complement/conf-workers/workers-shared.yaml b/docker/complement/conf-workers/workers-shared.yaml
new file mode 100644
index 0000000000..cdadb736f6
--- /dev/null
+++ b/docker/complement/conf-workers/workers-shared.yaml
@@ -0,0 +1,77 @@
+## Server ##
+report_stats: False
+trusted_key_servers: []
+enable_registration: true
+enable_registration_without_verification: true
+bcrypt_rounds: 4
+
+## Federation ##
+
+# disable verification of federation certificates
+#
+# TODO: Figure out why this is still needed even though we are making use of the custom CA
+federation_verify_certificates: false
+
+# trust certs signed by Complement's CA
+federation_custom_ca_list:
+- /complement/ca/ca.crt
+
+# unblacklist RFC1918 addresses
+federation_ip_range_blacklist: []
+
+# Disable server rate-limiting
+rc_federation:
+  window_size: 1000
+  sleep_limit: 10
+  sleep_delay: 500
+  reject_limit: 99999
+  concurrent: 3
+
+rc_message:
+  per_second: 9999
+  burst_count: 9999
+
+rc_registration:
+  per_second: 9999
+  burst_count: 9999
+
+rc_login:
+  address:
+    per_second: 9999
+    burst_count: 9999
+  account:
+    per_second: 9999
+    burst_count: 9999
+  failed_attempts:
+    per_second: 9999
+    burst_count: 9999
+
+rc_admin_redaction:
+  per_second: 9999
+  burst_count: 9999
+
+rc_joins:
+  local:
+    per_second: 9999
+    burst_count: 9999
+  remote:
+    per_second: 9999
+    burst_count: 9999
+
+federation_rr_transactions_per_room_per_second: 9999
+
+## Experimental Features ##
+
+experimental_features:
+  # Enable history backfilling support
+  msc2716_enabled: true
+  # Enable spaces support
+  spaces_enabled: true
+  # Enable jump to date endpoint
+  msc3030_enabled: true
+
+server_notices:
+  system_mxid_localpart: _server
+  system_mxid_display_name: "Server Alert"
+  system_mxid_avatar_url: ""
+  room_name: "Server Alert"
diff --git a/docker/complement/conf/homeserver.yaml b/docker/complement/conf/homeserver.yaml
new file mode 100644
index 0000000000..be53c4aa2e
--- /dev/null
+++ b/docker/complement/conf/homeserver.yaml
@@ -0,0 +1,122 @@
+## Server ##
+
+server_name: SERVER_NAME
+log_config: /conf/log_config.yaml
+report_stats: False
+signing_key_path: /conf/server.signing.key
+trusted_key_servers: []
+enable_registration: true
+enable_registration_without_verification: true
+
+## Listeners ##
+
+tls_certificate_path: /conf/server.tls.crt
+tls_private_key_path: /conf/server.tls.key
+bcrypt_rounds: 4
+registration_shared_secret: complement
+
+listeners:
+  - port: 8448
+    bind_addresses: ['::']
+    type: http
+    tls: true
+    resources:
+      - names: [federation]
+
+  - port: 8008
+    bind_addresses: ['::']
+    type: http
+
+    resources:
+      - names: [client]
+
+## Database ##
+
+database:
+  name: "sqlite3"
+  args:
+    # We avoid /data, as it is a volume and is not transferred when the container is committed,
+    # which is a fundamental necessity in complement.
+    database: "/conf/homeserver.db"
+
+## Federation ##
+
+
+# disable verification of federation certificates
+#
+# TODO: this is temporary; see
+# https://github.com/matrix-org/synapse/issues/11803
+federation_verify_certificates: false
+
+# trust certs signed by the complement CA
+federation_custom_ca_list:
+- /complement/ca/ca.crt
+
+# unblacklist RFC1918 addresses
+ip_range_blacklist: []
+
+# Disable server rate-limiting
+rc_federation:
+  window_size: 1000
+  sleep_limit: 10
+  sleep_delay: 500
+  reject_limit: 99999
+  concurrent: 3
+
+rc_message:
+  per_second: 9999
+  burst_count: 9999
+
+rc_registration:
+  per_second: 9999
+  burst_count: 9999
+
+rc_login:
+  address:
+    per_second: 9999
+    burst_count: 9999
+  account:
+    per_second: 9999
+    burst_count: 9999
+  failed_attempts:
+    per_second: 9999
+    burst_count: 9999
+
+rc_admin_redaction:
+  per_second: 9999
+  burst_count: 9999
+
+rc_joins:
+  local:
+    per_second: 9999
+    burst_count: 9999
+  remote:
+    per_second: 9999
+    burst_count: 9999
+
+federation_rr_transactions_per_room_per_second: 9999
+
+## API Configuration ##
+
+# A list of application service config files to use
+#
+app_service_config_files:
+AS_REGISTRATION_FILES
+
+## Experimental Features ##
+
+experimental_features:
+  # Enable spaces support
+  spaces_enabled: true
+  # Enable history backfilling support
+  msc2716_enabled: true
+  # server-side support for partial state in /send_join
+  msc3706_enabled: true
+  # Enable jump to date endpoint
+  msc3030_enabled: true
+
+server_notices:
+  system_mxid_localpart: _server
+  system_mxid_display_name: "Server Alert"
+  system_mxid_avatar_url: ""
+  room_name: "Server Alert"
diff --git a/docker/complement/conf/log_config.yaml b/docker/complement/conf/log_config.yaml
new file mode 100644
index 0000000000..c33fd6cd00
--- /dev/null
+++ b/docker/complement/conf/log_config.yaml
@@ -0,0 +1,24 @@
+version: 1
+
+formatters:
+  precise:
+    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
+
+filters:
+  context:
+    (): synapse.logging.context.LoggingContextFilter
+    request: ""
+
+handlers:
+  console:
+    class: logging.StreamHandler
+    formatter: precise
+    filters: [context]
+    # log to stdout, for easier use with 'docker logs'
+    stream: 'ext://sys.stdout'
+
+root:
+    level: INFO
+    handlers: [console]
+
+disable_existing_loggers: false
diff --git a/docker/complement/conf/start.sh b/docker/complement/conf/start.sh
new file mode 100755
index 0000000000..5d8d0fe016
--- /dev/null
+++ b/docker/complement/conf/start.sh
@@ -0,0 +1,30 @@
+#!/bin/sh
+
+set -e
+
+sed -i "s/SERVER_NAME/${SERVER_NAME}/g" /conf/homeserver.yaml
+
+# Add the application service registration files to the homeserver.yaml config
+for filename in /complement/appservice/*.yaml; do
+  [ -f "$filename" ] || break
+
+  as_id=$(basename "$filename" .yaml)
+
+  # Insert the path to the registration file and the AS_REGISTRATION_FILES marker after
+  # so we can add the next application service in the next iteration of this for loop
+  sed -i "s/AS_REGISTRATION_FILES/  - \/complement\/appservice\/${as_id}.yaml\nAS_REGISTRATION_FILES/g" /conf/homeserver.yaml
+done
+# Remove the AS_REGISTRATION_FILES entry
+sed -i "s/AS_REGISTRATION_FILES//g" /conf/homeserver.yaml
+
+# generate an ssl key and cert for the server, signed by the complement CA
+openssl genrsa -out /conf/server.tls.key 2048
+
+openssl req -new -key /conf/server.tls.key -out /conf/server.tls.csr \
+  -subj "/CN=${SERVER_NAME}"
+openssl x509 -req -in /conf/server.tls.csr \
+  -CA /complement/ca/ca.crt -CAkey /complement/ca/ca.key -set_serial 1 \
+  -out /conf/server.tls.crt
+
+exec python -m synapse.app.homeserver -c /conf/homeserver.yaml "$@"
diff --git a/mypy.ini b/mypy.ini
index 791e1c72d9..ab1a79bb7d 100644
--- a/mypy.ini
+++ b/mypy.ini
@@ -92,6 +92,9 @@ exclude = (?x)
    |tests/utils.py
    )$
 
+[mypy-synapse._scripts.*]
+disallow_untyped_defs = True
+
 [mypy-synapse.api.*]
 disallow_untyped_defs = True
 
diff --git a/scripts-dev/complement.sh b/scripts-dev/complement.sh
index d1b59ff040..05e9e470ed 100755
--- a/scripts-dev/complement.sh
+++ b/scripts-dev/complement.sh
@@ -50,25 +50,18 @@ if [[ -n "$WORKERS" ]]; then
     export COMPLEMENT_BASE_IMAGE=complement-synapse-workers
     COMPLEMENT_DOCKERFILE=SynapseWorkers.Dockerfile
+
     # And provide some more configuration to complement.
-    export COMPLEMENT_CA=true
     export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=25
 else
     export COMPLEMENT_BASE_IMAGE=complement-synapse
-    COMPLEMENT_DOCKERFILE=Synapse.Dockerfile
+    COMPLEMENT_DOCKERFILE=Dockerfile
 fi
 
 # Build the Complement image from the Synapse image we just built.
-docker build -t $COMPLEMENT_BASE_IMAGE -f "$COMPLEMENT_DIR/dockerfiles/$COMPLEMENT_DOCKERFILE" "$COMPLEMENT_DIR/dockerfiles"
-
-cd "$COMPLEMENT_DIR"
-
-EXTRA_COMPLEMENT_ARGS=""
-if [[ -n "$1" ]]; then
-    # A test name regex has been set, supply it to Complement
-    EXTRA_COMPLEMENT_ARGS+="-run $1 "
-fi
+docker build -t $COMPLEMENT_BASE_IMAGE -f "docker/complement/$COMPLEMENT_DOCKERFILE" "docker/complement"
 
 # Run the tests!
 echo "Images built; running complement"
-go test -v -tags synapse_blacklist,msc2716,msc3030 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
+cd "$COMPLEMENT_DIR"
+go test -v -tags synapse_blacklist,msc2716,msc3030 -count=1 "$@" ./tests/...
diff --git a/synapse/_scripts/export_signing_key.py b/synapse/_scripts/export_signing_key.py
index 66481533e9..12c890bdbd 100755
--- a/synapse/_scripts/export_signing_key.py
+++ b/synapse/_scripts/export_signing_key.py
@@ -15,19 +15,19 @@ import argparse
 import sys
 import time
-from typing import Optional
+from typing import NoReturn, Optional
 
 from signedjson.key import encode_verify_key_base64, get_verify_key, read_signing_keys
 from signedjson.types import VerifyKey
 
 
-def exit(status: int = 0, message: Optional[str] = None):
+def exit(status: int = 0, message: Optional[str] = None) -> NoReturn:
     if message:
         print(message, file=sys.stderr)
     sys.exit(status)
 
 
-def format_plain(public_key: VerifyKey):
+def format_plain(public_key: VerifyKey) -> None:
     print(
         "%s:%s %s"
         % (
@@ -38,7 +38,7 @@ def format_plain(public_key: VerifyKey):
     )
 
 
-def format_for_config(public_key: VerifyKey, expiry_ts: int):
+def format_for_config(public_key: VerifyKey, expiry_ts: int) -> None:
     print(
         '  "%s:%s": { key: "%s", expired_ts: %i }'
         % (
@@ -50,7 +50,7 @@ def format_for_config(public_key: VerifyKey, expiry_ts: int):
     )
 
 
-def main():
+def main() -> None:
     parser = argparse.ArgumentParser()
 
     parser.add_argument(
@@ -94,7 +94,6 @@ def main():
                 message="Error reading key from file %s: %s %s"
                 % (file.name, type(e), e),
             )
-            res = []
         for key in res:
             formatter(get_verify_key(key))
diff --git a/synapse/_scripts/generate_config.py b/synapse/_scripts/generate_config.py
index 75fce20b12..08eb8ef114 100755
--- a/synapse/_scripts/generate_config.py
+++ b/synapse/_scripts/generate_config.py
@@ -7,7 +7,7 @@ import sys
 from synapse.config.homeserver import HomeServerConfig
 
 
-def main():
+def main() -> None:
     parser = argparse.ArgumentParser()
     parser.add_argument(
         "--config-dir",
diff --git a/synapse/_scripts/generate_log_config.py b/synapse/_scripts/generate_log_config.py
index 82fc763140..7ae08ec0e3 100755
--- a/synapse/_scripts/generate_log_config.py
+++ b/synapse/_scripts/generate_log_config.py
@@ -20,7 +20,7 @@ import sys
 from synapse.config.logger import DEFAULT_LOG_CONFIG
 
 
-def main():
+def main() -> None:
     parser = argparse.ArgumentParser()
 
     parser.add_argument(
diff --git a/synapse/_scripts/generate_signing_key.py b/synapse/_scripts/generate_signing_key.py
index bc26d25bfd..3f8f5da75f 100755
--- a/synapse/_scripts/generate_signing_key.py
+++ b/synapse/_scripts/generate_signing_key.py
@@ -20,7 +20,7 @@ from signedjson.key import generate_signing_key, write_signing_keys
 from synapse.util.stringutils import random_string
 
 
-def main():
+def main() -> None:
     parser = argparse.ArgumentParser()
 
     parser.add_argument(
diff --git a/synapse/_scripts/hash_password.py b/synapse/_scripts/hash_password.py
index 708640c7de..3aa29de5bd 100755
--- a/synapse/_scripts/hash_password.py
+++ b/synapse/_scripts/hash_password.py
@@ -9,7 +9,7 @@ import bcrypt
 import yaml
 
 
-def prompt_for_pass():
+def prompt_for_pass() -> str:
     password = getpass.getpass("Password: ")
 
     if not password:
@@ -23,7 +23,7 @@ def prompt_for_pass():
     return password
 
 
-def main():
+def main() -> None:
     bcrypt_rounds = 12
     password_pepper = ""
 
diff --git a/synapse/_scripts/move_remote_media_to_new_store.py b/synapse/_scripts/move_remote_media_to_new_store.py
index f53bf790af..819afaaca6 100755
--- a/synapse/_scripts/move_remote_media_to_new_store.py
+++ b/synapse/_scripts/move_remote_media_to_new_store.py
@@ -42,7 +42,7 @@ from synapse.rest.media.v1.filepath import MediaFilePaths
 logger = logging.getLogger()
 
 
-def main(src_repo, dest_repo):
+def main(src_repo: str, dest_repo: str) -> None:
     src_paths = MediaFilePaths(src_repo)
     dest_paths = MediaFilePaths(dest_repo)
     for line in sys.stdin:
@@ -55,14 +55,19 @@ def main(src_repo, dest_repo):
         move_media(parts[0], parts[1], src_paths, dest_paths)
 
 
-def move_media(origin_server, file_id, src_paths, dest_paths):
+def move_media(
+    origin_server: str,
+    file_id: str,
+    src_paths: MediaFilePaths,
+    dest_paths: MediaFilePaths,
+) -> None:
     """Move the given file, and any thumbnails, to the dest repo
 
     Args:
-        origin_server (str):
-        file_id (str):
-        src_paths (MediaFilePaths):
-        dest_paths (MediaFilePaths):
+        origin_server:
+        file_id:
+        src_paths:
+        dest_paths:
     """
     logger.info("%s/%s", origin_server, file_id)
 
@@ -91,7 +96,7 @@ def move_media(origin_server, file_id, src_paths, dest_paths):
     )
 
 
-def mkdir_and_move(original_file, dest_file):
+def mkdir_and_move(original_file: str, dest_file: str) -> None:
     dirname = os.path.dirname(dest_file)
     if not os.path.exists(dirname):
         logger.debug("mkdir %s", dirname)
diff --git a/synapse/_scripts/register_new_matrix_user.py b/synapse/_scripts/register_new_matrix_user.py
index 4ffe6a1ef3..092601f530 100644
--- a/synapse/_scripts/register_new_matrix_user.py
+++ b/synapse/_scripts/register_new_matrix_user.py
@@ -22,7 +22,7 @@ import logging
 import sys
 from typing import Callable, Optional
 
-import requests as _requests
+import requests
 import yaml
 
 
@@ -33,7 +33,6 @@ def request_registration(
     shared_secret: str,
     admin: bool = False,
     user_type: Optional[str] = None,
-    requests=_requests,
     _print: Callable[[str], None] = print,
     exit: Callable[[int], None] = sys.exit,
 ) -> None:
diff --git a/synapse/_scripts/synapse_port_db.py b/synapse/_scripts/synapse_port_db.py
index 123eaae5c5..12ff79f6e2 100755
--- a/synapse/_scripts/synapse_port_db.py
+++ b/synapse/_scripts/synapse_port_db.py
@@ -22,10 +22,26 @@ import sys
 import time
 import traceback
 from types import TracebackType
-from typing import Dict, Iterable, Optional, Set, Tuple, Type, cast
+from typing import (
+    Any,
+    Awaitable,
+    Callable,
+    Dict,
+    Generator,
+    Iterable,
+    List,
+    NoReturn,
+    Optional,
+    Set,
+    Tuple,
+    Type,
+    TypeVar,
+    cast,
+)
 
 import yaml
 from matrix_common.versionstring import get_distribution_version_string
+from typing_extensions import TypedDict
 
 from twisted.internet import defer, reactor as reactor_
 
@@ -36,7 +52,7 @@ from synapse.logging.context import (
     make_deferred_yieldable,
     run_in_background,
 )
-from synapse.storage.database import DatabasePool, make_conn
+from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
 from synapse.storage.databases.main import PushRuleStore
 from synapse.storage.databases.main.account_data import AccountDataWorkerStore
 from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
@@ -173,6 +189,8 @@ end_error_exec_info: Optional[
     Tuple[Type[BaseException], BaseException, TracebackType]
 ] = None
 
+R = TypeVar("R")
+
 
 class Store(
     ClientIpBackgroundUpdateStore,
@@ -195,17 +213,19 @@ class Store(
     PresenceBackgroundUpdateStore,
     GroupServerWorkerStore,
 ):
-    def execute(self, f, *args, **kwargs):
+    def execute(self, f: Callable[..., R], *args: Any, **kwargs: Any) -> Awaitable[R]:
        return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
 
-    def execute_sql(self, sql, *args):
-        def r(txn):
+    def execute_sql(self, sql: str, *args: object) -> Awaitable[List[Tuple]]:
+        def r(txn: LoggingTransaction) -> List[Tuple]:
             txn.execute(sql, args)
             return txn.fetchall()
 
         return self.db_pool.runInteraction("execute_sql", r)
 
-    def insert_many_txn(self, txn, table, headers, rows):
+    def insert_many_txn(
+        self, txn: LoggingTransaction, table: str, headers: List[str], rows: List[Tuple]
+    ) -> None:
         sql = "INSERT INTO %s (%s) VALUES (%s)" % (
             table,
             ", ".join(k for k in headers),
@@ -218,14 +238,15 @@ class Store(
             logger.exception("Failed to insert: %s", table)
             raise
 
-    def set_room_is_public(self, room_id, is_public):
+    # Note: the parent method is an `async def`.
+    def set_room_is_public(self, room_id: str, is_public: bool) -> NoReturn:
         raise Exception(
             "Attempt to set room_is_public during port_db: database not empty?"
         )
 
 
 class MockHomeserver:
-    def __init__(self, config):
+    def __init__(self, config: HomeServerConfig):
         self.clock = Clock(reactor)
         self.config = config
         self.hostname = config.server.server_name
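The `Store.execute` hunk above is a tidy example of typing a pass-through wrapper: a `TypeVar` ties the awaitable's result to whatever the callback returns, so callers keep precise types. Below is a self-contained sketch of the same pattern — a toy stand-in, not Synapse's actual `DatabasePool`:

```python
import asyncio
from typing import Any, Awaitable, Callable, TypeVar

R = TypeVar("R")

class ToyPool:
    """Stand-in for DatabasePool: runs a callback and returns its result."""

    def run_interaction(
        self, desc: str, f: Callable[..., R], *args: Any, **kwargs: Any
    ) -> Awaitable[R]:
        async def _run() -> R:
            # A real pool would open a transaction and pass it to `f` here.
            return f(*args, **kwargs)

        return asyncio.ensure_future(_run())

async def main() -> None:
    pool = ToyPool()
    count = await pool.run_interaction("count_rows", lambda: 42)
    print(count + 1)  # mypy infers `count` is an int -- no cast needed

asyncio.run(main())
```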
@@ -233,24 +254,30 @@ class MockHomeserver:
             "matrix-synapse"
         )
 
-    def get_clock(self):
+    def get_clock(self) -> Clock:
         return self.clock
 
-    def get_reactor(self):
+    def get_reactor(self) -> ISynapseReactor:
         return reactor
 
-    def get_instance_name(self):
+    def get_instance_name(self) -> str:
         return "master"
 
 
 class Porter:
-    def __init__(self, sqlite_config, progress, batch_size, hs_config):
+    def __init__(
+        self,
+        sqlite_config: Dict[str, Any],
+        progress: "Progress",
+        batch_size: int,
+        hs_config: HomeServerConfig,
+    ):
         self.sqlite_config = sqlite_config
         self.progress = progress
         self.batch_size = batch_size
         self.hs_config = hs_config
 
-    async def setup_table(self, table):
+    async def setup_table(self, table: str) -> Tuple[str, int, int, int, int]:
         if table in APPEND_ONLY_TABLES:
             # It's safe to just carry on inserting.
             row = await self.postgres_store.db_pool.simple_select_one(
@@ -292,7 +319,7 @@ class Porter:
             )
 
         else:
 
-            def delete_all(txn):
+            def delete_all(txn: LoggingTransaction) -> None:
                 txn.execute(
                     "DELETE FROM port_from_sqlite3 WHERE table_name = %s", (table,)
                 )
@@ -317,7 +344,7 @@ class Porter:
     async def get_table_constraints(self) -> Dict[str, Set[str]]:
         """Returns a map of tables that have foreign key constraints to tables they depend on."""
 
-        def _get_constraints(txn):
+        def _get_constraints(txn: LoggingTransaction) -> Dict[str, Set[str]]:
             # We can pull the information about foreign key constraints out from
             # the postgres schema tables.
             sql = """
@@ -343,8 +370,13 @@ class Porter:
         )
 
     async def handle_table(
-        self, table, postgres_size, table_size, forward_chunk, backward_chunk
-    ):
+        self,
+        table: str,
+        postgres_size: int,
+        table_size: int,
+        forward_chunk: int,
+        backward_chunk: int,
+    ) -> None:
         logger.info(
             "Table %s: %i/%i (rows %i-%i) already ported",
             table,
@@ -391,7 +423,9 @@ class Porter:
 
         while True:
 
-            def r(txn):
+            def r(
+                txn: LoggingTransaction,
+            ) -> Tuple[Optional[List[str]], List[Tuple], List[Tuple]]:
                 forward_rows = []
                 backward_rows = []
                 if do_forward[0]:
@@ -418,6 +452,7 @@ class Porter:
             )
 
             if frows or brows:
+                assert headers is not None
                 if frows:
                     forward_chunk = max(row[0] for row in frows) + 1
                 if brows:
@@ -426,7 +461,8 @@ class Porter:
                 rows = frows + brows
                 rows = self._convert_rows(table, headers, rows)
 
-                def insert(txn):
+                def insert(txn: LoggingTransaction) -> None:
+                    assert headers is not None
                     self.postgres_store.insert_many_txn(txn, table, headers[1:], rows)
 
                     self.postgres_store.db_pool.simple_update_one_txn(
@@ -448,8 +484,12 @@ class Porter:
             return
 
     async def handle_search_table(
-        self, postgres_size, table_size, forward_chunk, backward_chunk
-    ):
+        self,
+        postgres_size: int,
+        table_size: int,
+        forward_chunk: int,
+        backward_chunk: int,
+    ) -> None:
         select = (
             "SELECT es.rowid, es.*, e.origin_server_ts, e.stream_ordering"
             " FROM event_search as es"
@@ -460,7 +500,7 @@ class Porter:
 
         while True:
 
-            def r(txn):
+            def r(txn: LoggingTransaction) -> Tuple[List[str], List[Tuple]]:
                 txn.execute(select, (forward_chunk, self.batch_size))
                 rows = txn.fetchall()
                 headers = [column[0] for column in txn.description]
@@ -474,7 +514,7 @@ class Porter:
 
             # We have to treat event_search differently since it has a
             # different structure in the two different databases.
-            def insert(txn):
+            def insert(txn: LoggingTransaction) -> None:
                 sql = (
                     "INSERT INTO event_search (event_id, room_id, key,"
                     " sender, vector, origin_server_ts, stream_ordering)"
@@ -528,7 +568,7 @@ class Porter:
         self,
         db_config: DatabaseConnectionConfig,
         allow_outdated_version: bool = False,
-    ):
+    ) -> Store:
         """Builds and returns a database store using the provided configuration.
 
         Args:
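Several hunks here add `assert headers is not None` inside nested `def r(txn): ...` callbacks even though the enclosing code has already ruled out `None`. The reason: mypy deliberately discards narrowing for variables captured by a closure, since the closure may run after the variable has been rebound. A minimal illustration, with invented names:

```python
from typing import Callable, List, Optional

def run_later(f: Callable[[], None]) -> None:
    f()  # stand-in for runInteraction scheduling the callback

def port_table(headers: Optional[List[str]]) -> None:
    if headers is None:
        return  # narrowed to List[str] here...

    def insert() -> None:
        # ...but the narrowing does not survive into the closure, because
        # `headers` could be rebound before `insert` actually runs.
        assert headers is not None
        print(", ".join(headers[1:]))

    run_later(insert)

port_table(["rowid", "name", "is_public"])
```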
@@ -556,7 +596,7 @@
 
         return store
 
-    async def run_background_updates_on_postgres(self):
+    async def run_background_updates_on_postgres(self) -> None:
         # Manually apply all background updates on the PostgreSQL database.
         postgres_ready = (
             await self.postgres_store.db_pool.updates.has_completed_background_updates()
@@ -568,12 +608,12 @@
             self.progress.set_state("Running background updates on PostgreSQL")
 
         while not postgres_ready:
-            await self.postgres_store.db_pool.updates.do_next_background_update(100)
+            await self.postgres_store.db_pool.updates.do_next_background_update(True)
             postgres_ready = await (
                 self.postgres_store.db_pool.updates.has_completed_background_updates()
             )
 
-    async def run(self):
+    async def run(self) -> None:
         """Ports the SQLite database to a PostgreSQL database.
 
         When a fatal error is met, its message is assigned to the global "end_error"
@@ -609,7 +649,7 @@ class Porter:
 
             self.progress.set_state("Creating port tables")
 
-            def create_port_table(txn):
+            def create_port_table(txn: LoggingTransaction) -> None:
                 txn.execute(
                     "CREATE TABLE IF NOT EXISTS port_from_sqlite3 ("
                     " table_name varchar(100) NOT NULL UNIQUE,"
@@ -622,7 +662,7 @@ class Porter:
             # We want people to be able to rerun this script from an old port
             # so that they can pick up any missing events that were not
             # ported across.
-            def alter_table(txn):
+            def alter_table(txn: LoggingTransaction) -> None:
                 txn.execute(
                     "ALTER TABLE IF EXISTS port_from_sqlite3"
                     " RENAME rowid TO forward_rowid"
@@ -742,7 +782,9 @@ class Porter:
         finally:
             reactor.stop()
 
-    def _convert_rows(self, table, headers, rows):
+    def _convert_rows(
+        self, table: str, headers: List[str], rows: List[Tuple]
+    ) -> List[Tuple]:
         bool_col_names = BOOLEAN_COLUMNS.get(table, [])
 
         bool_cols = [i for i, h in enumerate(headers) if h in bool_col_names]
@@ -750,7 +792,7 @@ class Porter:
         class BadValueException(Exception):
             pass
 
-        def conv(j, col):
+        def conv(j: int, col: object) -> object:
             if j in bool_cols:
                 return bool(col)
             if isinstance(col, bytes):
@@ -776,7 +818,7 @@ class Porter:
 
         return outrows
 
-    async def _setup_sent_transactions(self):
+    async def _setup_sent_transactions(self) -> Tuple[int, int, int]:
         # Only save things from the last day
         yesterday = int(time.time() * 1000) - 86400000
 
@@ -788,10 +830,10 @@ class Porter:
             ")"
         )
 
-        def r(txn):
+        def r(txn: LoggingTransaction) -> Tuple[List[str], List[Tuple]]:
             txn.execute(select)
             rows = txn.fetchall()
-            headers = [column[0] for column in txn.description]
+            headers: List[str] = [column[0] for column in txn.description]
 
             ts_ind = headers.index("ts")
 
@@ -805,7 +847,7 @@ class Porter:
         if inserted_rows:
             max_inserted_rowid = max(r[0] for r in rows)
 
-            def insert(txn):
+            def insert(txn: LoggingTransaction) -> None:
                 self.postgres_store.insert_many_txn(
                     txn, "sent_transactions", headers[1:], rows
                 )
@@ -814,7 +856,7 @@ class Porter:
         else:
             max_inserted_rowid = 0
 
-        def get_start_id(txn):
+        def get_start_id(txn: LoggingTransaction) -> int:
             txn.execute(
                 "SELECT rowid FROM sent_transactions WHERE ts >= ?"
                 " ORDER BY rowid ASC LIMIT 1",
@@ -839,12 +881,13 @@ class Porter:
             },
         )
 
-        def get_sent_table_size(txn):
+        def get_sent_table_size(txn: LoggingTransaction) -> int:
             txn.execute(
                 "SELECT count(*) FROM sent_transactions" " WHERE ts >= ?", (yesterday,)
             )
-            (size,) = txn.fetchone()
-            return int(size)
+            result = txn.fetchone()
+            assert result is not None
+            return int(result[0])
 
         remaining_count = await self.sqlite_store.execute(get_sent_table_size)
 
@@ -852,25 +895,35 @@ class Porter:
 
         return next_chunk, inserted_rows, total_count
 
-    async def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
-        frows = await self.sqlite_store.execute_sql(
-            "SELECT count(*) FROM %s WHERE rowid >= ?" % (table,), forward_chunk
+    async def _get_remaining_count_to_port(
+        self, table: str, forward_chunk: int, backward_chunk: int
+    ) -> int:
+        frows = cast(
+            List[Tuple[int]],
+            await self.sqlite_store.execute_sql(
+                "SELECT count(*) FROM %s WHERE rowid >= ?" % (table,), forward_chunk
+            ),
         )
 
-        brows = await self.sqlite_store.execute_sql(
-            "SELECT count(*) FROM %s WHERE rowid <= ?" % (table,), backward_chunk
+        brows = cast(
+            List[Tuple[int]],
+            await self.sqlite_store.execute_sql(
+                "SELECT count(*) FROM %s WHERE rowid <= ?" % (table,), backward_chunk
+            ),
         )
 
         return frows[0][0] + brows[0][0]
 
-    async def _get_already_ported_count(self, table):
+    async def _get_already_ported_count(self, table: str) -> int:
         rows = await self.postgres_store.execute_sql(
             "SELECT count(*) FROM %s" % (table,)
         )
 
         return rows[0][0]
 
-    async def _get_total_count_to_port(self, table, forward_chunk, backward_chunk):
+    async def _get_total_count_to_port(
+        self, table: str, forward_chunk: int, backward_chunk: int
+    ) -> Tuple[int, int]:
         remaining, done = await make_deferred_yieldable(
             defer.gatherResults(
                 [
@@ -891,14 +944,17 @@ class Porter:
         return done, remaining + done
 
     async def _setup_state_group_id_seq(self) -> None:
-        curr_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
+        curr_id: Optional[
+            int
+        ] = await self.sqlite_store.db_pool.simple_select_one_onecol(
             table="state_groups", keyvalues={}, retcol="MAX(id)", allow_none=True
         )
 
         if not curr_id:
             return
 
-        def r(txn):
+        def r(txn: LoggingTransaction) -> None:
+            assert curr_id is not None
             next_id = curr_id + 1
             txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
 
@@ -909,7 +965,7 @@ class Porter:
             "setup_user_id_seq", find_max_generated_user_id_localpart
         )
 
-        def r(txn):
+        def r(txn: LoggingTransaction) -> None:
             next_id = curr_id + 1
             txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
 
@@ -931,7 +987,7 @@ class Porter:
             allow_none=True,
         )
 
-        def _setup_events_stream_seqs_set_pos(txn):
+        def _setup_events_stream_seqs_set_pos(txn: LoggingTransaction) -> None:
             if curr_forward_id:
                 txn.execute(
                     "ALTER SEQUENCE events_stream_seq RESTART WITH %s",
@@ -955,17 +1011,20 @@ class Porter:
         """Set a sequence to the correct value."""
         current_stream_ids = []
         for stream_id_table in stream_id_tables:
-            max_stream_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
-                table=stream_id_table,
-                keyvalues={},
-                retcol="COALESCE(MAX(stream_id), 1)",
-                allow_none=True,
+            max_stream_id = cast(
+                int,
+                await self.sqlite_store.db_pool.simple_select_one_onecol(
+                    table=stream_id_table,
+                    keyvalues={},
+                    retcol="COALESCE(MAX(stream_id), 1)",
+                    allow_none=True,
+                ),
             )
             current_stream_ids.append(max_stream_id)
 
         next_id = max(current_stream_ids) + 1
 
-        def r(txn):
+        def r(txn: LoggingTransaction) -> None:
             sql = "ALTER SEQUENCE %s RESTART WITH" % (sequence_name,)
             txn.execute(sql + " %s", (next_id,))
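The `cast(List[Tuple[int]], ...)` wrappers above exist because `execute_sql` is only typed as returning `List[Tuple]`. `typing.cast` is a no-op at runtime — purely an assertion to the checker — so it should only encode knowledge the author actually has, such as the shape of a `count(*)` result. A minimal sketch of the same idea:

```python
from typing import Any, List, Tuple, cast

def execute_sql(sql: str, *args: Any) -> List[Tuple]:
    """Loosely typed, like Store.execute_sql above."""
    return [(3,)]  # pretend this came from the database

# The query's shape is known to the author, not to the type checker:
frows = cast(List[Tuple[int]], execute_sql("SELECT count(*) FROM events"))
total: int = frows[0][0]  # now well-typed
print(total)
```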
@@ -974,14 +1033,18 @@ class Porter:
         )
 
     async def _setup_auth_chain_sequence(self) -> None:
-        curr_chain_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
+        curr_chain_id: Optional[
+            int
+        ] = await self.sqlite_store.db_pool.simple_select_one_onecol(
             table="event_auth_chains",
             keyvalues={},
             retcol="MAX(chain_id)",
             allow_none=True,
         )
 
-        def r(txn):
+        def r(txn: LoggingTransaction) -> None:
+            # Presumably there is at least one row in event_auth_chains.
+            assert curr_chain_id is not None
             txn.execute(
                 "ALTER SEQUENCE event_auth_chain_id RESTART WITH %s",
                 (curr_chain_id + 1,),
@@ -999,15 +1062,22 @@
 
 ##############################################
 
 
-class Progress(object):
+class TableProgress(TypedDict):
+    start: int
+    num_done: int
+    total: int
+    perc: int
+
+
+class Progress:
     """Used to report progress of the port"""
 
-    def __init__(self):
-        self.tables = {}
+    def __init__(self) -> None:
+        self.tables: Dict[str, TableProgress] = {}
         self.start_time = int(time.time())
 
-    def add_table(self, table, cur, size):
+    def add_table(self, table: str, cur: int, size: int) -> None:
         self.tables[table] = {
             "start": cur,
             "num_done": cur,
@@ -1015,19 +1085,22 @@ class Progress:
             "perc": int(cur * 100 / size),
         }
 
-    def update(self, table, num_done):
+    def update(self, table: str, num_done: int) -> None:
         data = self.tables[table]
         data["num_done"] = num_done
         data["perc"] = int(num_done * 100 / data["total"])
 
-    def done(self):
+    def done(self) -> None:
+        pass
+
+    def set_state(self, state: str) -> None:
         pass
 
 
 class CursesProgress(Progress):
     """Reports progress to a curses window"""
 
-    def __init__(self, stdscr):
+    def __init__(self, stdscr: "curses.window"):
         self.stdscr = stdscr
 
         curses.use_default_colors()
@@ -1045,7 +1118,7 @@ class CursesProgress(Progress):
 
         super(CursesProgress, self).__init__()
 
-    def update(self, table, num_done):
+    def update(self, table: str, num_done: int) -> None:
         super(CursesProgress, self).update(table, num_done)
 
         self.total_processed = 0
@@ -1056,7 +1129,7 @@ class CursesProgress(Progress):
 
         self.render()
 
-    def render(self, force=False):
+    def render(self, force: bool = False) -> None:
         now = time.time()
 
         if not force and now - self.last_update < 0.2:
@@ -1128,12 +1201,12 @@ class CursesProgress(Progress):
         self.stdscr.refresh()
         self.last_update = time.time()
 
-    def done(self):
+    def done(self) -> None:
         self.finished = True
         self.render(True)
         self.stdscr.getch()
 
-    def set_state(self, state):
+    def set_state(self, state: str) -> None:
         self.stdscr.clear()
         self.stdscr.addstr(0, 0, state + "...", curses.A_BOLD)
         self.stdscr.refresh()
@@ -1142,7 +1215,7 @@
 class TerminalProgress(Progress):
     """Just prints progress to the terminal"""
 
-    def update(self, table, num_done):
+    def update(self, table: str, num_done: int) -> None:
         super(TerminalProgress, self).update(table, num_done)
 
         data = self.tables[table]
@@ -1151,7 +1224,7 @@
             "%s: %d%% (%d/%d)" % (table, data["perc"], data["num_done"], data["total"])
         )
 
-    def set_state(self, state):
+    def set_state(self, state: str) -> None:
         print(state + "...")
 
 
@@ -1159,7 +1232,7 @@
 ##############################################
 
 
-def main():
+def main() -> None:
     parser = argparse.ArgumentParser(
         description="A script to port an existing synapse SQLite database to"
        " a new PostgreSQL database."
@@ -1225,7 +1298,7 @@ def main():
     config = HomeServerConfig()
     config.parse_config_dict(hs_config, "", "")
 
-    def start(stdscr=None):
+    def start(stdscr: Optional["curses.window"] = None) -> None:
         progress: Progress
         if stdscr:
             progress = CursesProgress(stdscr)
@@ -1240,7 +1313,7 @@ def main():
         )
 
         @defer.inlineCallbacks
-        def run():
+        def run() -> Generator["defer.Deferred[Any]", Any, None]:
             with LoggingContext("synapse_port_db_run"):
                 yield defer.ensureDeferred(porter.run())
 
diff --git a/synapse/_scripts/synctl.py b/synapse/_scripts/synctl.py
index 1ab36949c7..b4c96ad7f3 100755
--- a/synapse/_scripts/synctl.py
+++ b/synapse/_scripts/synctl.py
@@ -24,7 +24,7 @@ import signal
 import subprocess
 import sys
 import time
-from typing import Iterable, Optional
+from typing import Iterable, NoReturn, Optional, TextIO
 
 import yaml
 
@@ -45,7 +45,7 @@ one of the following:
 --------------------------------------------------------------------------------"""
 
 
-def pid_running(pid):
+def pid_running(pid: int) -> bool:
     try:
         os.kill(pid, 0)
     except OSError as err:
@@ -68,7 +68,7 @@ def pid_running(pid):
     return True
 
 
-def write(message, colour=NORMAL, stream=sys.stdout):
+def write(message: str, colour: str = NORMAL, stream: TextIO = sys.stdout) -> None:
     # Lets check if we're writing to a TTY before colouring
     should_colour = False
     try:
@@ -84,7 +84,7 @@ def write(message, colour=NORMAL, stream=sys.stdout):
     stream.write(colour + message + NORMAL + "\n")
 
 
-def abort(message, colour=RED, stream=sys.stderr):
+def abort(message: str, colour: str = RED, stream: TextIO = sys.stderr) -> NoReturn:
     write(message, colour, stream)
     sys.exit(1)
 
@@ -166,7 +166,7 @@ Worker = collections.namedtuple(
 )
 
 
-def main():
+def main() -> None:
 
     parser = argparse.ArgumentParser()
 
diff --git a/synapse/_scripts/update_synapse_database.py b/synapse/_scripts/update_synapse_database.py
index 736f58836d..c443522c05 100755
--- a/synapse/_scripts/update_synapse_database.py
+++ b/synapse/_scripts/update_synapse_database.py
@@ -38,25 +38,25 @@ logger = logging.getLogger("update_database")
 class MockHomeserver(HomeServer):
     DATASTORE_CLASS = DataStore  # type: ignore [assignment]
 
-    def __init__(self, config, **kwargs):
+    def __init__(self, config: HomeServerConfig):
         super(MockHomeserver, self).__init__(
-            config.server.server_name, reactor=reactor, config=config, **kwargs
-        )
-
-        self.version_string = "Synapse/" + get_distribution_version_string(
-            "matrix-synapse"
+            hostname=config.server.server_name,
+            config=config,
+            reactor=reactor,
+            version_string="Synapse/"
+            + get_distribution_version_string("matrix-synapse"),
         )
 
 
-def run_background_updates(hs):
+def run_background_updates(hs: HomeServer) -> None:
     store = hs.get_datastores().main
 
-    async def run_background_updates():
+    async def run_background_updates() -> None:
         await store.db_pool.updates.run_background_updates(sleep=False)
         # Stop the reactor to exit the script once every background update is run.
         reactor.stop()
 
-    def run():
+    def run() -> None:
         # Apply all background updates on the database.
         defer.ensureDeferred(
             run_as_background_process("background_updates", run_background_updates)
@@ -67,7 +67,7 @@ def run_background_updates(hs):
     reactor.run()
 
 
-def main():
+def main() -> None:
     parser = argparse.ArgumentParser(
         description=(
             "Updates a synapse database to the latest schema and optionally runs background updates"
diff --git a/synapse/module_api/__init__.py b/synapse/module_api/__init__.py
index 9a61593ff5..8f9e629274 100644
--- a/synapse/module_api/__init__.py
+++ b/synapse/module_api/__init__.py
@@ -119,6 +119,7 @@ from synapse.types import (
 from synapse.util import Clock
 from synapse.util.async_helpers import maybe_awaitable
 from synapse.util.caches.descriptors import cached
+from synapse.util.frozenutils import freeze
 
 if TYPE_CHECKING:
     from synapse.app.generic_worker import GenericWorkerSlavedStore
@@ -211,6 +212,7 @@ class ModuleApi:
         # We expose these as properties below in order to attach a helpful docstring.
         self._http_client: SimpleHttpClient = hs.get_simple_http_client()
         self._public_room_list_manager = PublicRoomListManager(hs)
+        self._account_data_manager = AccountDataManager(hs)
 
         self._spam_checker = hs.get_spam_checker()
         self._account_validity_handler = hs.get_account_validity_handler()
@@ -432,6 +434,14 @@ class ModuleApi:
         return self._public_room_list_manager
 
     @property
+    def account_data_manager(self) -> "AccountDataManager":
+        """Allows reading and modifying users' account data.
+
+        Added in Synapse v1.57.0.
+        """
+        return self._account_data_manager
+
+    @property
     def public_baseurl(self) -> str:
         """The configured public base URL for this homeserver.
 
@@ -1386,3 +1396,69 @@ class PublicRoomListManager:
             room_id: The ID of the room.
         """
         await self._store.set_room_is_public(room_id, False)
+
+
+class AccountDataManager:
+    """
+    Allows modules to manage account data.
+    """
+
+    def __init__(self, hs: "HomeServer") -> None:
+        self._hs = hs
+        self._store = hs.get_datastores().main
+        self._handler = hs.get_account_data_handler()
+
+    def _validate_user_id(self, user_id: str) -> None:
+        """
+        Validates a user ID is valid and local.
+        Private method to be used in other account data methods.
+        """
+        user = UserID.from_string(user_id)
+        if not self._hs.is_mine(user):
+            raise ValueError(
+                f"{user_id} is not local to this homeserver; can't access account data for remote users."
+            )
+
+    async def get_global(self, user_id: str, data_type: str) -> Optional[JsonDict]:
+        """
+        Gets some global account data, of a specified type, for the specified user.
+
+        The provided user ID must be a valid user ID of a local user.
+
+        Added in Synapse v1.57.0.
+        """
+        self._validate_user_id(user_id)
+
+        data = await self._store.get_global_account_data_by_type_for_user(
+            user_id, data_type
+        )
+        # We clone and freeze to prevent the module accidentally mutating the
+        # dict that lives in the cache, as that could introduce nasty bugs.
+        return freeze(data)
+
+    async def put_global(
+        self, user_id: str, data_type: str, new_data: JsonDict
+    ) -> None:
+        """
+        Puts some global account data, of a specified type, for the specified user.
+
+        The provided user ID must be a valid user ID of a local user.
+
+        Please note that this will overwrite existing the account data of that type
+        for that user!
+
+        Added in Synapse v1.57.0.
+        """
+        self._validate_user_id(user_id)
+
+        if not isinstance(data_type, str):
+            raise TypeError(f"data_type must be a str; got {type(data_type).__name__}")
+
+        if not isinstance(new_data, dict):
+            raise TypeError(f"new_data must be a dict; got {type(new_data).__name__}")
+
+        # Ensure the user exists, so we don't just write to users that aren't there.
+        if await self._store.get_userinfo_by_id(user_id) is None:
+            raise ValueError(f"User {user_id} does not exist on this server.")
+
+        await self._handler.add_account_data_for_user(user_id, data_type, new_data)
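That completes the new `AccountDataManager`. A sketch of how a module might consume it: the `get_global`/`put_global` calls and their semantics come from the code above, while the module class itself and the `org.example.counter` data type are invented for illustration.

```python
from typing import Any

class ExampleModule:
    """Hypothetical module; only the manager calls come from this commit."""

    def __init__(self, config: Any, api: Any) -> None:
        self._api = api  # a synapse.module_api.ModuleApi instance

    async def bump_counter(self, user_id: str) -> None:
        # Returns a frozen dict (or None); mutating the result raises TypeError.
        current = await self._api.account_data_manager.get_global(
            user_id, "org.example.counter"
        )
        count = current["count"] if current else 0
        # Overwrites any existing account data of this type for the user.
        await self._api.account_data_manager.put_global(
            user_id, "org.example.counter", {"count": count + 1}
        )
```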
diff --git a/synapse/storage/types.py b/synapse/storage/types.py
index 57f4883bf4..d7d6f1d90e 100644
--- a/synapse/storage/types.py
+++ b/synapse/storage/types.py
@@ -45,6 +45,7 @@ class Cursor(Protocol):
         Sequence[
             # Note that this is an approximate typing based on sqlite3 and other
             # drivers, and may not be entirely accurate.
+            # FWIW, the DBAPI 2 spec is: https://peps.python.org/pep-0249/#description
             Tuple[
                 str,
                 Optional[Any],
diff --git a/tests/module_api/test_account_data_manager.py b/tests/module_api/test_account_data_manager.py
new file mode 100644
index 0000000000..bec018d9e7
--- /dev/null
+++ b/tests/module_api/test_account_data_manager.py
@@ -0,0 +1,157 @@
+# Copyright 2022 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from synapse.api.errors import SynapseError
+from synapse.rest import admin
+
+from tests.unittest import HomeserverTestCase
+
+
+class ModuleApiTestCase(HomeserverTestCase):
+    servlets = [
+        admin.register_servlets,
+    ]
+
+    def prepare(self, reactor, clock, homeserver) -> None:
+        self._store = homeserver.get_datastores().main
+        self._module_api = homeserver.get_module_api()
+        self._account_data_mgr = self._module_api.account_data_manager
+
+        self.user_id = self.register_user("kristina", "secret")
+
+    def test_get_global(self) -> None:
+        """
+        Tests that getting global account data through the module API works as
+        expected, including getting `None` for unset account data.
+        """
+        self.get_success(
+            self._store.add_account_data_for_user(
+                self.user_id, "test.data", {"wombat": True}
+            )
+        )
+
+        # Getting existent account data works as expected.
+        self.assertEqual(
+            self.get_success(
+                self._account_data_mgr.get_global(self.user_id, "test.data")
+            ),
+            {"wombat": True},
+        )
+
+        # Getting non-existent account data returns None.
+        self.assertIsNone(
+            self.get_success(
+                self._account_data_mgr.get_global(self.user_id, "no.data.at.all")
+            )
+        )
+
+    def test_get_global_validation(self) -> None:
+        """
+        Tests that invalid or remote user IDs are treated as errors and raised as exceptions,
+        whilst getting global account data for a user.
+
+        This is a design choice to try and communicate potential bugs to modules
+        earlier on.
+        """
+        with self.assertRaises(SynapseError):
+            self.get_success_or_raise(
+                self._account_data_mgr.get_global("this isn't a user id", "test.data")
+            )
+
+        with self.assertRaises(ValueError):
+            self.get_success_or_raise(
+                self._account_data_mgr.get_global("@valid.but:remote", "test.data")
+            )
+
+    def test_get_global_no_mutability(self) -> None:
+        """
+        Tests that modules can't introduce bugs into Synapse by mutating the result
+        of `get_global`.
+        """
+        # First add some account data to set up the test.
+        self.get_success(
+            self._store.add_account_data_for_user(
+                self.user_id, "test.data", {"wombat": True}
+            )
+        )
+
+        # Now request that data and then mutate it (out of negligence or otherwise).
+        the_data = self.get_success(
+            self._account_data_mgr.get_global(self.user_id, "test.data")
+        )
+        with self.assertRaises(TypeError):
+            # This throws an exception because it's a frozen dict.
+            the_data["wombat"] = False
+
+    def test_put_global(self) -> None:
+        """
+        Tests that written account data using `put_global` can be read out again later.
+        """
+
+        self.get_success(
+            self._module_api.account_data_manager.put_global(
+                self.user_id, "test.data", {"wombat": True}
+            )
+        )
+
+        # Request that account data from the normal store; check it's as we expect.
+        self.assertEqual(
+            self.get_success(
+                self._store.get_global_account_data_by_type_for_user(
+                    self.user_id, "test.data"
+                )
+            ),
+            {"wombat": True},
+        )
+
+    def test_put_global_validation(self) -> None:
+        """
+        Tests that a module can't write account data to user IDs that don't have
+        actual users registered to them.
+        Modules also must supply the correct types.
+        """
+
+        with self.assertRaises(SynapseError):
+            self.get_success_or_raise(
+                self._account_data_mgr.put_global(
+                    "this isn't a user id", "test.data", {}
+                )
+            )
+
+        with self.assertRaises(ValueError):
+            self.get_success_or_raise(
+                self._account_data_mgr.put_global("@valid.but:remote", "test.data", {})
+            )
+
+        with self.assertRaises(ValueError):
+            self.get_success_or_raise(
+                self._module_api.account_data_manager.put_global(
+                    "@notregistered:test", "test.data", {}
+                )
+            )
+
+        with self.assertRaises(TypeError):
+            # The account data type must be a string.
+            self.get_success_or_raise(
+                self._module_api.account_data_manager.put_global(
+                    self.user_id, 42, {}  # type: ignore
+                )
+            )
+
+        with self.assertRaises(TypeError):
+            # The account data dict must be a dict.
+            self.get_success_or_raise(
+                self._module_api.account_data_manager.put_global(
+                    self.user_id, "test.data", 42  # type: ignore
+                )
+            )
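The `TypeError` asserted in `test_get_global_no_mutability` above comes from the `freeze()` call in `get_global`: modules receive a frozen copy, so a stray mutation fails loudly instead of silently corrupting the cache. The same behaviour in miniature (assuming a Synapse install for `frozenutils`):

```python
from synapse.util.frozenutils import freeze

cached = {"wombat": True}      # pretend this dict lives in Synapse's cache
handed_out = freeze(cached)    # what get_global hands to a module

try:
    handed_out["wombat"] = False   # a buggy module mutating its result
except TypeError:
    print("mutation rejected")

assert cached == {"wombat": True}  # the cached copy is untouched
```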
diff --git a/tests/scripts/test_new_matrix_user.py b/tests/scripts/test_new_matrix_user.py
index 6f3c365c9a..19a145eeb6 100644
--- a/tests/scripts/test_new_matrix_user.py
+++ b/tests/scripts/test_new_matrix_user.py
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from unittest.mock import Mock
+from unittest.mock import Mock, patch
 
 from synapse._scripts.register_new_matrix_user import request_registration
 
@@ -52,16 +52,16 @@ class RegisterTestCase(TestCase):
         out = []
         err_code = []
 
-        request_registration(
-            "user",
-            "pass",
-            "matrix.org",
-            "shared",
-            admin=False,
-            requests=requests,
-            _print=out.append,
-            exit=err_code.append,
-        )
+        with patch("synapse._scripts.register_new_matrix_user.requests", requests):
+            request_registration(
+                "user",
+                "pass",
+                "matrix.org",
+                "shared",
+                admin=False,
+                _print=out.append,
+                exit=err_code.append,
+            )
 
         # We should get the success message making sure everything is OK.
         self.assertIn("Success!", out)
@@ -88,16 +88,16 @@ class RegisterTestCase(TestCase):
         out = []
         err_code = []
 
-        request_registration(
-            "user",
-            "pass",
-            "matrix.org",
-            "shared",
-            admin=False,
-            requests=requests,
-            _print=out.append,
-            exit=err_code.append,
-        )
+        with patch("synapse._scripts.register_new_matrix_user.requests", requests):
+            request_registration(
+                "user",
+                "pass",
+                "matrix.org",
+                "shared",
+                admin=False,
+                _print=out.append,
+                exit=err_code.append,
+            )
 
         # Exit was called
         self.assertEqual(err_code, [1])
@@ -140,16 +140,16 @@ class RegisterTestCase(TestCase):
         out = []
         err_code = []
 
-        request_registration(
-            "user",
-            "pass",
-            "matrix.org",
-            "shared",
-            admin=False,
-            requests=requests,
-            _print=out.append,
-            exit=err_code.append,
-        )
+        with patch("synapse._scripts.register_new_matrix_user.requests", requests):
+            request_registration(
+                "user",
+                "pass",
+                "matrix.org",
+                "shared",
+                admin=False,
+                _print=out.append,
+                exit=err_code.append,
+            )
 
         # Exit was called
         self.assertEqual(err_code, [1])
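These rewritten tests (together with the `register_new_matrix_user` hunk earlier in this diff) replace the `requests=_requests` injection parameter with `unittest.mock.patch` of the module-level import. The same pattern in a self-contained, hypothetical test — only the patch-target idea comes from the diff; `fetch_status` and its module are invented:

```python
import unittest
from unittest.mock import Mock, patch

import requests  # module-level binding, as in register_new_matrix_user

def fetch_status(url: str) -> int:
    return requests.get(url).status_code

class FetchStatusTests(unittest.TestCase):
    def test_fetch_status(self) -> None:
        fake = Mock()
        fake.get.return_value.status_code = 200
        # Patch the name where it is looked up, just as the tests above patch
        # "synapse._scripts.register_new_matrix_user.requests".
        with patch(f"{__name__}.requests", fake):
            self.assertEqual(fetch_status("https://example.org"), 200)
        fake.get.assert_called_once_with("https://example.org")

if __name__ == "__main__":
    unittest.main()
```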