diff --git a/CHANGES.md b/CHANGES.md
index d8cfbbebef..3cacca5a6b 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -1,3 +1,85 @@
+Synapse 0.99.4rc1 (2019-05-13)
+==============================
+
+Features
+--------
+
+- Add systemd-python to the optional dependencies to enable logging to the systemd journal. Install with `pip install matrix-synapse[systemd]`. ([\#4339](https://github.com/matrix-org/synapse/issues/4339))
+- Add a default .m.rule.tombstone push rule. ([\#4867](https://github.com/matrix-org/synapse/issues/4867))
+- Add ability for password provider modules to bind email addresses to users upon registration. ([\#4947](https://github.com/matrix-org/synapse/issues/4947))
+- Implementation of [MSC1711](https://github.com/matrix-org/matrix-doc/pull/1711) including config options for requiring valid TLS certificates for federation traffic, the ability to disable TLS validation for specific domains, and the ability to specify your own list of CA certificates. ([\#4967](https://github.com/matrix-org/synapse/issues/4967))
+- Remove presence list support as per MSC1819. ([\#4989](https://github.com/matrix-org/synapse/issues/4989))
+- Reduce CPU usage when starting pushers during startup. ([\#4991](https://github.com/matrix-org/synapse/issues/4991))
+- Add a delete group admin API. ([\#5002](https://github.com/matrix-org/synapse/issues/5002))
+- Add config option to block users from looking up 3PIDs. ([\#5010](https://github.com/matrix-org/synapse/issues/5010))
+- Add context to phonehome stats. ([\#5020](https://github.com/matrix-org/synapse/issues/5020))
+- Configure the example systemd units to have a log identifier of `matrix-synapse`
+ instead of the executable name, `python`.
+ Contributed by Christoph Müller. ([\#5023](https://github.com/matrix-org/synapse/issues/5023))
+- Add time-based account expiration. ([\#5027](https://github.com/matrix-org/synapse/issues/5027), [\#5047](https://github.com/matrix-org/synapse/issues/5047), [\#5073](https://github.com/matrix-org/synapse/issues/5073), [\#5116](https://github.com/matrix-org/synapse/issues/5116))
+- Add support for handling the /versions, /voip and /push_rules client endpoints in the client_reader worker. ([\#5063](https://github.com/matrix-org/synapse/issues/5063), [\#5065](https://github.com/matrix-org/synapse/issues/5065), [\#5070](https://github.com/matrix-org/synapse/issues/5070))
+- Add a configuration option to require authentication on the /publicRooms and /profile endpoints. ([\#5083](https://github.com/matrix-org/synapse/issues/5083))
+- Move admin APIs to `/_synapse/admin/v1`. (The old paths are retained for backwards-compatibility, for now). ([\#5119](https://github.com/matrix-org/synapse/issues/5119))
+- Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work. ([\#5121](https://github.com/matrix-org/synapse/issues/5121), [\#5142](https://github.com/matrix-org/synapse/issues/5142))
+
+
+Bugfixes
+--------
+
+- Avoid redundant URL encoding of redirect URL for SSO login in the fallback login page. Fixes a regression introduced in [#4220](https://github.com/matrix-org/synapse/pull/4220). Contributed by Marcel Fabian Krüger ("[zaugin](https://github.com/zauguin)"). ([\#4555](https://github.com/matrix-org/synapse/issues/4555))
+- Fix bug where presence updates were sent to all servers in a room when a new server joined, rather than to just the new server. ([\#4942](https://github.com/matrix-org/synapse/issues/4942), [\#5103](https://github.com/matrix-org/synapse/issues/5103))
+- Fix sync bug which made accepting invites unreliable in worker-mode synapses. ([\#4955](https://github.com/matrix-org/synapse/issues/4955), [\#4956](https://github.com/matrix-org/synapse/issues/4956))
+- start.sh: Fix the --no-rate-limit option for messages and make it bypass the rate limit on registration and login too. ([\#4981](https://github.com/matrix-org/synapse/issues/4981))
+- Transfer related groups on room upgrade. ([\#4990](https://github.com/matrix-org/synapse/issues/4990))
+- Prevent the ability to kick users from a room they aren't in. ([\#4999](https://github.com/matrix-org/synapse/issues/4999))
+- Fix issue #4596 so synapse_port_db script works with --curses option on Python 3. Contributed by Anders Jensen-Waud <anders@jensenwaud.com>. ([\#5003](https://github.com/matrix-org/synapse/issues/5003))
+- Clients timing out/disappearing while downloading from the media repository will now no longer log a spurious "Producer was not unregistered" message. ([\#5009](https://github.com/matrix-org/synapse/issues/5009))
+- Fix "cannot import name execute_batch" error with postgres. ([\#5032](https://github.com/matrix-org/synapse/issues/5032))
+- Fix disappearing exceptions in manhole. ([\#5035](https://github.com/matrix-org/synapse/issues/5035))
+- Work around a bug in Twisted where attempting too many concurrent DNS requests could cause it to hang due to running out of file descriptors. ([\#5037](https://github.com/matrix-org/synapse/issues/5037))
+- Make sure we're not registering the same 3pid twice on registration. ([\#5071](https://github.com/matrix-org/synapse/issues/5071))
+- Don't crash on lack of expiry templates. ([\#5077](https://github.com/matrix-org/synapse/issues/5077))
+- Fix the ratelimiting on third party invites. ([\#5104](https://github.com/matrix-org/synapse/issues/5104))
+- Add some missing limitations to room alias creation. ([\#5124](https://github.com/matrix-org/synapse/issues/5124), [\#5128](https://github.com/matrix-org/synapse/issues/5128))
+- Limit the number of EDUs in transactions to 100, as expected by Synapse. Thanks to @superboum for this work! ([\#5138](https://github.com/matrix-org/synapse/issues/5138))
+- Fix bogus imports in unit tests. ([\#5154](https://github.com/matrix-org/synapse/issues/5154))
+
+
+Internal Changes
+----------------
+
+- Add test to verify threepid auth check added in #4435. ([\#4474](https://github.com/matrix-org/synapse/issues/4474))
+- Fix/improve some docstrings in the replication code. ([\#4949](https://github.com/matrix-org/synapse/issues/4949))
+- Split synapse.replication.tcp.streams into smaller files. ([\#4953](https://github.com/matrix-org/synapse/issues/4953))
+- Refactor replication row generation/parsing. ([\#4954](https://github.com/matrix-org/synapse/issues/4954))
+- Run `black` to clean up formatting on `synapse/storage/roommember.py` and `synapse/storage/events.py`. ([\#4959](https://github.com/matrix-org/synapse/issues/4959))
+- Remove log line for password via the admin API. ([\#4965](https://github.com/matrix-org/synapse/issues/4965))
+- Fix typo in TLS filenames in docker/README.md. Also add the '-p' commandline option to the 'docker run' example. Contributed by Jurrie Overgoor. ([\#4968](https://github.com/matrix-org/synapse/issues/4968))
+- Refactor room version definitions. ([\#4969](https://github.com/matrix-org/synapse/issues/4969))
+- Reduce log level of .well-known/matrix/client responses. ([\#4972](https://github.com/matrix-org/synapse/issues/4972))
+- Add `config.signing_key_path` that can be read by `synapse.config` utility. ([\#4974](https://github.com/matrix-org/synapse/issues/4974))
+- Track which identity server is used when binding a threepid and use that for unbinding, as per MSC1915. ([\#4982](https://github.com/matrix-org/synapse/issues/4982))
+- Rewrite KeyringTestCase as a HomeserverTestCase. ([\#4985](https://github.com/matrix-org/synapse/issues/4985))
+- README updates: Corrected the default POSTGRES_USER. Added port forwarding hint in TLS section. ([\#4987](https://github.com/matrix-org/synapse/issues/4987))
+- Remove a number of unused tables from the database schema. ([\#4992](https://github.com/matrix-org/synapse/issues/4992), [\#5028](https://github.com/matrix-org/synapse/issues/5028), [\#5033](https://github.com/matrix-org/synapse/issues/5033))
+- Run `black` on the remainder of `synapse/storage/`. ([\#4996](https://github.com/matrix-org/synapse/issues/4996))
+- Fix grammar in get_current_users_in_room and give it a docstring. ([\#4998](https://github.com/matrix-org/synapse/issues/4998))
+- Clean up some code in the server-key Keyring. ([\#5001](https://github.com/matrix-org/synapse/issues/5001))
+- Convert SYNAPSE_NO_TLS Docker variable to boolean for user friendliness. Contributed by Gabriel Eckerson. ([\#5005](https://github.com/matrix-org/synapse/issues/5005))
+- Refactor synapse.storage._base._simple_select_list_paginate. ([\#5007](https://github.com/matrix-org/synapse/issues/5007))
+- Store the notary server name correctly in server_keys_json. ([\#5024](https://github.com/matrix-org/synapse/issues/5024))
+- Rewrite Datastore.get_server_verify_keys to reduce the number of database transactions. ([\#5030](https://github.com/matrix-org/synapse/issues/5030))
+- Remove extraneous period from copyright headers. ([\#5046](https://github.com/matrix-org/synapse/issues/5046))
+- Update documentation for where to get Synapse packages. ([\#5067](https://github.com/matrix-org/synapse/issues/5067))
+- Add workarounds for pep-517 install errors. ([\#5098](https://github.com/matrix-org/synapse/issues/5098))
+- Improve logging when event-signature checks fail. ([\#5100](https://github.com/matrix-org/synapse/issues/5100))
+- Factor out an "assert_requester_is_admin" function. ([\#5120](https://github.com/matrix-org/synapse/issues/5120))
+- Remove the requirement to authenticate for /admin/server_version. ([\#5122](https://github.com/matrix-org/synapse/issues/5122))
+- Prevent an exception from being raised in an IResolutionReceiver and use a more generic error message for blacklisted URL previews. ([\#5155](https://github.com/matrix-org/synapse/issues/5155))
+- Run `black` on the tests directory. ([\#5170](https://github.com/matrix-org/synapse/issues/5170))
+- Fix CI after new release of isort. ([\#5179](https://github.com/matrix-org/synapse/issues/5179))
+
+
Synapse 0.99.3.2 (2019-05-03)
=============================
diff --git a/INSTALL.md b/INSTALL.md
index 1032a2c68e..b88d826f6c 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -257,9 +257,8 @@ https://github.com/spantaleev/matrix-docker-ansible-deploy
#### Matrix.org packages
Matrix.org provides Debian/Ubuntu packages of the latest stable version of
-Synapse via https://packages.matrix.org/debian/. To use them:
-
-For Debian 9 (Stretch), Ubuntu 16.04 (Xenial), and later:
+Synapse via https://packages.matrix.org/debian/. They are available for Debian
+9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:
```
sudo apt install -y lsb-release wget apt-transport-https
@@ -270,19 +269,6 @@ sudo apt update
sudo apt install matrix-synapse-py3
```
-For Debian 8 (Jessie):
-
-```
-sudo apt install -y lsb-release wget apt-transport-https
-sudo wget -O /etc/apt/trusted.gpg.d/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
-echo "deb [signed-by=5586CCC0CBBBEFC7A25811ADF473DD4473365DE1] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
- sudo tee /etc/apt/sources.list.d/matrix-org.list
-sudo apt update
-sudo apt install matrix-synapse-py3
-```
-
-The fingerprint of the repository signing key is AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058.
-
**Note**: if you followed a previous version of these instructions which
recommended using `apt-key add` to add an old key from
`https://matrix.org/packages/debian/`, you should note that this key has been
@@ -290,6 +276,9 @@ revoked. You should remove the old key with `sudo apt-key remove
C35EB17E1EAE708E6603A9B3AD0592FE47F0DF61`, and follow the above instructions to
update your configuration.
+The fingerprint of the repository signing key (as shown by `gpg
+/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
+`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
#### Downstream Debian/Ubuntu packages
diff --git a/changelog.d/4339.feature b/changelog.d/4339.feature
deleted file mode 100644
index cecff97b80..0000000000
--- a/changelog.d/4339.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add systemd-python to the optional dependencies to enable logging to the systemd journal. Install with `pip install matrix-synapse[systemd]`.
diff --git a/changelog.d/4474.misc b/changelog.d/4474.misc
deleted file mode 100644
index 4b882d60be..0000000000
--- a/changelog.d/4474.misc
+++ /dev/null
@@ -1 +0,0 @@
-Add test to verify threepid auth check added in #4435.
diff --git a/changelog.d/4555.bugfix b/changelog.d/4555.bugfix
deleted file mode 100644
index d596022c3f..0000000000
--- a/changelog.d/4555.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Avoid redundant URL encoding of redirect URL for SSO login in the fallback login page. Fixes a regression introduced in [#4220](https://github.com/matrix-org/synapse/pull/4220). Contributed by Marcel Fabian Krüger ("[zaugin](https://github.com/zauguin)").
diff --git a/changelog.d/4867.feature b/changelog.d/4867.feature
deleted file mode 100644
index f5f9030e22..0000000000
--- a/changelog.d/4867.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add a default .m.rule.tombstone push rule.
diff --git a/changelog.d/4942.bugfix b/changelog.d/4942.bugfix
deleted file mode 100644
index 590d80d58f..0000000000
--- a/changelog.d/4942.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix bug where presence updates were sent to all servers in a room when a new server joined, rather than to just the new server.
diff --git a/changelog.d/4947.feature b/changelog.d/4947.feature
deleted file mode 100644
index b9d27b90f1..0000000000
--- a/changelog.d/4947.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add ability for password provider modules to bind email addresses to users upon registration.
\ No newline at end of file
diff --git a/changelog.d/4949.misc b/changelog.d/4949.misc
deleted file mode 100644
index 25c4e05a64..0000000000
--- a/changelog.d/4949.misc
+++ /dev/null
@@ -1 +0,0 @@
-Fix/improve some docstrings in the replication code.
diff --git a/changelog.d/4953.misc b/changelog.d/4953.misc
deleted file mode 100644
index 06a084e6ef..0000000000
--- a/changelog.d/4953.misc
+++ /dev/null
@@ -1,2 +0,0 @@
-Split synapse.replication.tcp.streams into smaller files.
-
diff --git a/changelog.d/4954.misc b/changelog.d/4954.misc
deleted file mode 100644
index 91f145950d..0000000000
--- a/changelog.d/4954.misc
+++ /dev/null
@@ -1 +0,0 @@
-Refactor replication row generation/parsing.
diff --git a/changelog.d/4955.bugfix b/changelog.d/4955.bugfix
deleted file mode 100644
index e50e67383d..0000000000
--- a/changelog.d/4955.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix sync bug which made accepting invites unreliable in worker-mode synapses.
diff --git a/changelog.d/4956.bugfix b/changelog.d/4956.bugfix
deleted file mode 100644
index e50e67383d..0000000000
--- a/changelog.d/4956.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix sync bug which made accepting invites unreliable in worker-mode synapses.
diff --git a/changelog.d/4959.misc b/changelog.d/4959.misc
deleted file mode 100644
index dd4275501f..0000000000
--- a/changelog.d/4959.misc
+++ /dev/null
@@ -1 +0,0 @@
-Run `black` to clean up formatting on `synapse/storage/roommember.py` and `synapse/storage/events.py`.
\ No newline at end of file
diff --git a/changelog.d/4965.misc b/changelog.d/4965.misc
deleted file mode 100644
index 284c58b75e..0000000000
--- a/changelog.d/4965.misc
+++ /dev/null
@@ -1 +0,0 @@
-Remove log line for password via the admin API.
diff --git a/changelog.d/4967.feature b/changelog.d/4967.feature
deleted file mode 100644
index 7f9f81f849..0000000000
--- a/changelog.d/4967.feature
+++ /dev/null
@@ -1 +0,0 @@
-Implementation of [MSC1711](https://github.com/matrix-org/matrix-doc/pull/1711) including config options for requiring valid TLS certificates for federation traffic, the ability to disable TLS validation for specific domains, and the ability to specify your own list of CA certificates.
diff --git a/changelog.d/4968.misc b/changelog.d/4968.misc
deleted file mode 100644
index 7a7b69771c..0000000000
--- a/changelog.d/4968.misc
+++ /dev/null
@@ -1 +0,0 @@
-Fix typo in TLS filenames in docker/README.md. Also add the '-p' commandline option to the 'docker run' example. Contributed by Jurrie Overgoor.
diff --git a/changelog.d/4969.misc b/changelog.d/4969.misc
deleted file mode 100644
index e3a3214e6b..0000000000
--- a/changelog.d/4969.misc
+++ /dev/null
@@ -1,2 +0,0 @@
-Refactor room version definitions.
-
diff --git a/changelog.d/4972.misc b/changelog.d/4972.misc
deleted file mode 100644
index e104a9bb34..0000000000
--- a/changelog.d/4972.misc
+++ /dev/null
@@ -1 +0,0 @@
-Reduce log level of .well-known/matrix/client responses.
diff --git a/changelog.d/4974.misc b/changelog.d/4974.misc
deleted file mode 100644
index 672a18923a..0000000000
--- a/changelog.d/4974.misc
+++ /dev/null
@@ -1 +0,0 @@
-Add `config.signing_key_path` that can be read by `synapse.config` utility.
diff --git a/changelog.d/4981.bugfix b/changelog.d/4981.bugfix
deleted file mode 100644
index e51b45eec0..0000000000
--- a/changelog.d/4981.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-start.sh: Fix the --no-rate-limit option for messages and make it bypass rate limit on registration and login too.
\ No newline at end of file
diff --git a/changelog.d/4982.misc b/changelog.d/4982.misc
deleted file mode 100644
index 067c177d39..0000000000
--- a/changelog.d/4982.misc
+++ /dev/null
@@ -1 +0,0 @@
-Track which identity server is used when binding a threepid and use that for unbinding, as per MSC1915.
diff --git a/changelog.d/4985.misc b/changelog.d/4985.misc
deleted file mode 100644
index 50c9ff9e0f..0000000000
--- a/changelog.d/4985.misc
+++ /dev/null
@@ -1 +0,0 @@
-Rewrite KeyringTestCase as a HomeserverTestCase.
diff --git a/changelog.d/4987.misc b/changelog.d/4987.misc
deleted file mode 100644
index 33490e146f..0000000000
--- a/changelog.d/4987.misc
+++ /dev/null
@@ -1 +0,0 @@
-README updates: Corrected the default POSTGRES_USER. Added port forwarding hint in TLS section.
diff --git a/changelog.d/4989.feature b/changelog.d/4989.feature
deleted file mode 100644
index a5138f5612..0000000000
--- a/changelog.d/4989.feature
+++ /dev/null
@@ -1 +0,0 @@
-Remove presence list support as per MSC 1819.
diff --git a/changelog.d/4990.bugfix b/changelog.d/4990.bugfix
deleted file mode 100644
index 1b69d058f6..0000000000
--- a/changelog.d/4990.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Transfer related groups on room upgrade.
\ No newline at end of file
diff --git a/changelog.d/4991.feature b/changelog.d/4991.feature
deleted file mode 100644
index 034bf3239c..0000000000
--- a/changelog.d/4991.feature
+++ /dev/null
@@ -1 +0,0 @@
-Reduce CPU usage starting pushers during start up.
diff --git a/changelog.d/4992.misc b/changelog.d/4992.misc
deleted file mode 100644
index 3ee4228c09..0000000000
--- a/changelog.d/4992.misc
+++ /dev/null
@@ -1 +0,0 @@
-Remove a number of unused tables from the database schema.
diff --git a/changelog.d/4996.misc b/changelog.d/4996.misc
deleted file mode 100644
index ecac24e2be..0000000000
--- a/changelog.d/4996.misc
+++ /dev/null
@@ -1 +0,0 @@
-Run `black` on the remainder of `synapse/storage/`.
\ No newline at end of file
diff --git a/changelog.d/4998.misc b/changelog.d/4998.misc
deleted file mode 100644
index 7caf959139..0000000000
--- a/changelog.d/4998.misc
+++ /dev/null
@@ -1 +0,0 @@
-Fix grammar in get_current_users_in_room and give it a docstring.
diff --git a/changelog.d/4999.bugfix b/changelog.d/4999.bugfix
deleted file mode 100644
index acbc191960..0000000000
--- a/changelog.d/4999.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Prevent the ability to kick users from a room they aren't in.
diff --git a/changelog.d/5001.misc b/changelog.d/5001.misc
deleted file mode 100644
index bf590a016d..0000000000
--- a/changelog.d/5001.misc
+++ /dev/null
@@ -1 +0,0 @@
-Clean up some code in the server-key Keyring.
\ No newline at end of file
diff --git a/changelog.d/5002.feature b/changelog.d/5002.feature
deleted file mode 100644
index d8f50e963f..0000000000
--- a/changelog.d/5002.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add a delete group admin API.
diff --git a/changelog.d/5003.bugfix b/changelog.d/5003.bugfix
deleted file mode 100644
index 9955dc871f..0000000000
--- a/changelog.d/5003.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix issue #4596 so synapse_port_db script works with --curses option on Python 3. Contributed by Anders Jensen-Waud <anders@jensenwaud.com>.
diff --git a/changelog.d/5005.misc b/changelog.d/5005.misc
deleted file mode 100644
index f74147b352..0000000000
--- a/changelog.d/5005.misc
+++ /dev/null
@@ -1 +0,0 @@
-Convert SYNAPSE_NO_TLS Docker variable to boolean for user friendliness. Contributed by Gabriel Eckerson.
diff --git a/changelog.d/5007.misc b/changelog.d/5007.misc
deleted file mode 100644
index 05b6ce2c26..0000000000
--- a/changelog.d/5007.misc
+++ /dev/null
@@ -1 +0,0 @@
-Refactor synapse.storage._base._simple_select_list_paginate.
\ No newline at end of file
diff --git a/changelog.d/5009.bugfix b/changelog.d/5009.bugfix
deleted file mode 100644
index e66d3dd1aa..0000000000
--- a/changelog.d/5009.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Clients timing out/disappearing while downloading from the media repository will now no longer log a spurious "Producer was not unregistered" message.
\ No newline at end of file
diff --git a/changelog.d/5010.feature b/changelog.d/5010.feature
deleted file mode 100644
index 65ab198b71..0000000000
--- a/changelog.d/5010.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add config option to block users from looking up 3PIDs.
diff --git a/changelog.d/5020.feature b/changelog.d/5020.feature
deleted file mode 100644
index 71f7a8db2e..0000000000
--- a/changelog.d/5020.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add context to phonehome stats.
diff --git a/changelog.d/5024.misc b/changelog.d/5024.misc
deleted file mode 100644
index 07c13f28d0..0000000000
--- a/changelog.d/5024.misc
+++ /dev/null
@@ -1 +0,0 @@
-Store the notary server name correctly in server_keys_json.
diff --git a/changelog.d/5027.feature b/changelog.d/5027.feature
deleted file mode 100644
index 12766a82a7..0000000000
--- a/changelog.d/5027.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add time-based account expiration.
diff --git a/changelog.d/5028.misc b/changelog.d/5028.misc
deleted file mode 100644
index 3ee4228c09..0000000000
--- a/changelog.d/5028.misc
+++ /dev/null
@@ -1 +0,0 @@
-Remove a number of unused tables from the database schema.
diff --git a/changelog.d/5030.misc b/changelog.d/5030.misc
deleted file mode 100644
index 3456eb5381..0000000000
--- a/changelog.d/5030.misc
+++ /dev/null
@@ -1 +0,0 @@
-Rewrite Datastore.get_server_verify_keys to reduce the number of database transactions.
diff --git a/changelog.d/5032.bugfix b/changelog.d/5032.bugfix
deleted file mode 100644
index cd71180ce9..0000000000
--- a/changelog.d/5032.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix "cannot import name execute_batch" error with postgres.
diff --git a/changelog.d/5033.misc b/changelog.d/5033.misc
deleted file mode 100644
index 3ee4228c09..0000000000
--- a/changelog.d/5033.misc
+++ /dev/null
@@ -1 +0,0 @@
-Remove a number of unused tables from the database schema.
diff --git a/changelog.d/5035.bugfix b/changelog.d/5035.bugfix
deleted file mode 100644
index 85e154027e..0000000000
--- a/changelog.d/5035.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix disappearing exceptions in manhole.
diff --git a/changelog.d/5037.bugfix b/changelog.d/5037.bugfix
deleted file mode 100644
index c3af418ddf..0000000000
--- a/changelog.d/5037.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Workaround bug in twisted where attempting too many concurrent DNS requests could cause it to hang due to running out of file descriptors.
diff --git a/changelog.d/5043.feature b/changelog.d/5043.feature
new file mode 100644
index 0000000000..0f1e0ee30e
--- /dev/null
+++ b/changelog.d/5043.feature
@@ -0,0 +1 @@
+Add ability to blacklist IP ranges for the federation client.
diff --git a/changelog.d/5046.misc b/changelog.d/5046.misc
deleted file mode 100644
index eb966a5ae6..0000000000
--- a/changelog.d/5046.misc
+++ /dev/null
@@ -1 +0,0 @@
-Remove extraneous period from copyright headers.
diff --git a/changelog.d/5047.feature b/changelog.d/5047.feature
deleted file mode 100644
index 12766a82a7..0000000000
--- a/changelog.d/5047.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add time-based account expiration.
diff --git a/changelog.d/5063.feature b/changelog.d/5063.feature
deleted file mode 100644
index fd7b80018e..0000000000
--- a/changelog.d/5063.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add support for handling /verions, /voip and /push_rules client endpoints to client_reader worker.
diff --git a/changelog.d/5065.feature b/changelog.d/5065.feature
deleted file mode 100644
index fd7b80018e..0000000000
--- a/changelog.d/5065.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add support for handling /verions, /voip and /push_rules client endpoints to client_reader worker.
diff --git a/changelog.d/5067.misc b/changelog.d/5067.misc
deleted file mode 100644
index bbb4337dbf..0000000000
--- a/changelog.d/5067.misc
+++ /dev/null
@@ -1 +0,0 @@
-Update documentation for where to get Synapse packages.
diff --git a/changelog.d/5070.feature b/changelog.d/5070.feature
deleted file mode 100644
index fd7b80018e..0000000000
--- a/changelog.d/5070.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add support for handling /verions, /voip and /push_rules client endpoints to client_reader worker.
diff --git a/changelog.d/5071.bugfix b/changelog.d/5071.bugfix
deleted file mode 100644
index ddf7ab5fa8..0000000000
--- a/changelog.d/5071.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Make sure we're not registering the same 3pid twice on registration.
diff --git a/changelog.d/5073.feature b/changelog.d/5073.feature
deleted file mode 100644
index 12766a82a7..0000000000
--- a/changelog.d/5073.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add time-based account expiration.
diff --git a/changelog.d/5077.bugfix b/changelog.d/5077.bugfix
deleted file mode 100644
index f3345a6353..0000000000
--- a/changelog.d/5077.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Don't crash on lack of expiry templates.
diff --git a/changelog.d/5083.feature b/changelog.d/5083.feature
deleted file mode 100644
index 55d114b3fe..0000000000
--- a/changelog.d/5083.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add an configuration option to require authentication on /publicRooms and /profile endpoints.
diff --git a/changelog.d/5098.misc b/changelog.d/5098.misc
deleted file mode 100644
index 9cd83bf226..0000000000
--- a/changelog.d/5098.misc
+++ /dev/null
@@ -1 +0,0 @@
-Add workarounds for pep-517 install errors.
diff --git a/changelog.d/5100.misc b/changelog.d/5100.misc
deleted file mode 100644
index db5eb1b156..0000000000
--- a/changelog.d/5100.misc
+++ /dev/null
@@ -1 +0,0 @@
-Improve logging when event-signature checks fail.
diff --git a/changelog.d/5103.bugfix b/changelog.d/5103.bugfix
deleted file mode 100644
index 590d80d58f..0000000000
--- a/changelog.d/5103.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix bug where presence updates were sent to all servers in a room when a new server joined, rather than to just the new server.
diff --git a/changelog.d/5104.bugfix b/changelog.d/5104.bugfix
deleted file mode 100644
index f88aca8a40..0000000000
--- a/changelog.d/5104.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix the ratelimting on third party invites.
diff --git a/changelog.d/5116.feature b/changelog.d/5116.feature
deleted file mode 100644
index dcbf7c1fb8..0000000000
--- a/changelog.d/5116.feature
+++ /dev/null
@@ -1 +0,0 @@
- Add time-based account expiration.
diff --git a/changelog.d/5119.feature b/changelog.d/5119.feature
deleted file mode 100644
index a3a73f09d3..0000000000
--- a/changelog.d/5119.feature
+++ /dev/null
@@ -1 +0,0 @@
-Move admin APIs to `/_synapse/admin/v1`. (The old paths are retained for backwards-compatibility, for now).
\ No newline at end of file
diff --git a/changelog.d/5120.misc b/changelog.d/5120.misc
deleted file mode 100644
index 6575f29322..0000000000
--- a/changelog.d/5120.misc
+++ /dev/null
@@ -1 +0,0 @@
-Factor out an "assert_requester_is_admin" function.
diff --git a/changelog.d/5121.feature b/changelog.d/5121.feature
deleted file mode 100644
index 54b228680d..0000000000
--- a/changelog.d/5121.feature
+++ /dev/null
@@ -1 +0,0 @@
-Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work.
diff --git a/changelog.d/5122.misc b/changelog.d/5122.misc
deleted file mode 100644
index e1be8a6210..0000000000
--- a/changelog.d/5122.misc
+++ /dev/null
@@ -1 +0,0 @@
-Remove the requirement to authenticate for /admin/server_version.
diff --git a/changelog.d/5124.bugfix b/changelog.d/5124.bugfix
deleted file mode 100644
index 46df1e9fd5..0000000000
--- a/changelog.d/5124.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Add some missing limitations to room alias creation.
diff --git a/changelog.d/5128.bugfix b/changelog.d/5128.bugfix
deleted file mode 100644
index 46df1e9fd5..0000000000
--- a/changelog.d/5128.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Add some missing limitations to room alias creation.
diff --git a/changelog.d/5142.feature b/changelog.d/5142.feature
deleted file mode 100644
index 54b228680d..0000000000
--- a/changelog.d/5142.feature
+++ /dev/null
@@ -1 +0,0 @@
-Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work.
diff --git a/changelog.d/5154.bugfix b/changelog.d/5154.bugfix
deleted file mode 100644
index c7bd5ee6cf..0000000000
--- a/changelog.d/5154.bugfix
+++ /dev/null
@@ -1 +0,0 @@
-Fix bogus imports in unit tests.
diff --git a/changelog.d/5171.misc b/changelog.d/5171.misc
new file mode 100644
index 0000000000..d148b03b51
--- /dev/null
+++ b/changelog.d/5171.misc
@@ -0,0 +1 @@
+Update tests to consistently be configured via the same code that is used when loading from configuration files.
diff --git a/contrib/systemd-with-workers/system/matrix-synapse-worker@.service b/contrib/systemd-with-workers/system/matrix-synapse-worker@.service
index 912984b9d2..9d980d5168 100644
--- a/contrib/systemd-with-workers/system/matrix-synapse-worker@.service
+++ b/contrib/systemd-with-workers/system/matrix-synapse-worker@.service
@@ -12,6 +12,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.%i --config-path=/
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
+SyslogIdentifier=matrix-synapse-%i
[Install]
WantedBy=matrix-synapse.service
diff --git a/contrib/systemd-with-workers/system/matrix-synapse.service b/contrib/systemd-with-workers/system/matrix-synapse.service
index 8bb4e400dc..3aae19034c 100644
--- a/contrib/systemd-with-workers/system/matrix-synapse.service
+++ b/contrib/systemd-with-workers/system/matrix-synapse.service
@@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
+SyslogIdentifier=matrix-synapse
[Install]
WantedBy=matrix.target
diff --git a/contrib/systemd/matrix-synapse.service b/contrib/systemd/matrix-synapse.service
index efb157e941..595b69916c 100644
--- a/contrib/systemd/matrix-synapse.service
+++ b/contrib/systemd/matrix-synapse.service
@@ -22,10 +22,10 @@ Group=nogroup
WorkingDirectory=/opt/synapse
ExecStart=/opt/synapse/env/bin/python -m synapse.app.homeserver --config-path=/opt/synapse/homeserver.yaml
+SyslogIdentifier=matrix-synapse
# adjust the cache factor if necessary
# Environment=SYNAPSE_CACHE_FACTOR=2.0
[Install]
WantedBy=multi-user.target
-
diff --git a/debian/changelog b/debian/changelog
index c25425bf26..454fa8eb16 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,10 @@
+matrix-synapse-py3 (0.99.3.2+nmu1) UNRELEASED; urgency=medium
+
+ [ Christoph Müller ]
+ * Configure the systemd units to have a log identifier of `matrix-synapse`
+
+ -- Christoph Müller <iblzm@hotmail.de> Wed, 17 Apr 2019 16:17:32 +0200
+
matrix-synapse-py3 (0.99.3.2) stable; urgency=medium
* New synapse release 0.99.3.2.
diff --git a/debian/matrix-synapse.service b/debian/matrix-synapse.service
index 942e4b83fe..b0a8d72e6d 100644
--- a/debian/matrix-synapse.service
+++ b/debian/matrix-synapse.service
@@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
+SyslogIdentifier=matrix-synapse
[Install]
WantedBy=multi-user.target
diff --git a/docs/metrics-howto.rst b/docs/metrics-howto.rst
index 5bbb5a4f3a..32b064e2da 100644
--- a/docs/metrics-howto.rst
+++ b/docs/metrics-howto.rst
@@ -48,7 +48,10 @@ How to monitor Synapse metrics using Prometheus
- job_name: "synapse"
metrics_path: "/_synapse/metrics"
static_configs:
- - targets: ["my.server.here:9092"]
+ - targets: ["my.server.here:port"]
+
+ where ``my.server.here`` is the IP address of Synapse, and ``port`` is the listener port
+ configured with the ``metrics`` resource.
If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``.
diff --git a/docs/reverse_proxy.rst b/docs/reverse_proxy.rst
index cc81ceb84b..7619b1097b 100644
--- a/docs/reverse_proxy.rst
+++ b/docs/reverse_proxy.rst
@@ -69,6 +69,7 @@ Let's assume that we expect clients to connect to our server at
SSLEngine on
ServerName matrix.example.com;
+ AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
@@ -77,6 +78,7 @@ Let's assume that we expect clients to connect to our server at
SSLEngine on
ServerName example.com;
+ AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml
index bdfc34c6bd..c4e5c4cf39 100644
--- a/docs/sample_config.yaml
+++ b/docs/sample_config.yaml
@@ -115,6 +115,24 @@ pid_file: DATADIR/homeserver.pid
# - nyc.example.com
# - syd.example.com
+# Prevent federation requests from being sent to the following
+# blacklist IP address CIDR ranges. If this option is not specified, or
+# specified with an empty list, no ip range blacklist will be enforced.
+#
+# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
+# listed here, since they correspond to unroutable addresses.)
+#
+federation_ip_range_blacklist:
+ - '127.0.0.0/8'
+ - '10.0.0.0/8'
+ - '172.16.0.0/12'
+ - '192.168.0.0/16'
+ - '100.64.0.0/10'
+ - '169.254.0.0/16'
+ - '::1/128'
+ - 'fe80::/64'
+ - 'fc00::/7'
+
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
diff --git a/synapse/__init__.py b/synapse/__init__.py
index 315fa96551..cd9cfb2409 100644
--- a/synapse/__init__.py
+++ b/synapse/__init__.py
@@ -27,4 +27,4 @@ try:
except ImportError:
pass
-__version__ = "0.99.3.2"
+__version__ = "0.99.4rc1"
diff --git a/synapse/config/server.py b/synapse/config/server.py
index 8dce75c56a..7874cd9da7 100644
--- a/synapse/config/server.py
+++ b/synapse/config/server.py
@@ -17,6 +17,8 @@
import logging
import os.path
+from netaddr import IPSet
+
from synapse.http.endpoint import parse_and_validate_server_name
from synapse.python_dependencies import DependencyException, check_requirements
@@ -137,6 +139,24 @@ class ServerConfig(Config):
for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True
+ self.federation_ip_range_blacklist = config.get(
+ "federation_ip_range_blacklist", [],
+ )
+
+ # Attempt to create an IPSet from the given ranges
+ try:
+ self.federation_ip_range_blacklist = IPSet(
+ self.federation_ip_range_blacklist
+ )
+
+ # Always blacklist 0.0.0.0, ::
+ self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
+ except Exception as e:
+ raise ConfigError(
+ "Invalid range(s) provided in "
+ "federation_ip_range_blacklist: %s" % e
+ )
+
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
@@ -386,6 +406,24 @@ class ServerConfig(Config):
# - nyc.example.com
# - syd.example.com
+ # Prevent federation requests from being sent to the following
+ # blacklist IP address CIDR ranges. If this option is not specified, or
+ # specified with an empty list, no ip range blacklist will be enforced.
+ #
+ # (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
+ # listed here, since they correspond to unroutable addresses.)
+ #
+ federation_ip_range_blacklist:
+ - '127.0.0.0/8'
+ - '10.0.0.0/8'
+ - '172.16.0.0/12'
+ - '192.168.0.0/16'
+ - '100.64.0.0/10'
+ - '169.254.0.0/16'
+ - '::1/128'
+ - 'fe80::/64'
+ - 'fc00::/7'
+
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
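The `federation_ip_range_blacklist` handling above parses the configured CIDR ranges into a netaddr `IPSet` and always folds in `0.0.0.0` and `::`. A minimal sketch of how such a set answers membership queries (the ranges and helper name below are illustrative, not taken from this change):

```python
# Illustrative only: how a netaddr IPSet built from CIDR ranges is queried.
from netaddr import IPAddress, IPSet

blacklist = IPSet(["127.0.0.0/8", "10.0.0.0/8", "fc00::/7"])
blacklist.update(["0.0.0.0", "::"])  # mirror the always-blacklisted addresses

def is_blacklisted(address):
    """Return True if the address falls inside a blacklisted range."""
    return IPAddress(address) in blacklist

print(is_blacklisted("10.1.2.3"))       # True
print(is_blacklisted("93.184.216.34"))  # False
```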
diff --git a/synapse/federation/sender/per_destination_queue.py b/synapse/federation/sender/per_destination_queue.py
index be99211003..fae8bea392 100644
--- a/synapse/federation/sender/per_destination_queue.py
+++ b/synapse/federation/sender/per_destination_queue.py
@@ -33,12 +33,14 @@ from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage import UserPresenceState
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
+# This is defined in the Matrix spec and enforced by the receiver.
+MAX_EDUS_PER_TRANSACTION = 100
+
logger = logging.getLogger(__name__)
sent_edus_counter = Counter(
- "synapse_federation_client_sent_edus",
- "Total number of EDUs successfully sent",
+ "synapse_federation_client_sent_edus", "Total number of EDUs successfully sent"
)
sent_edus_by_type = Counter(
@@ -58,6 +60,7 @@ class PerDestinationQueue(object):
destination (str): the server_name of the destination that we are managing
transmission for.
"""
+
def __init__(self, hs, transaction_manager, destination):
self._server_name = hs.hostname
self._clock = hs.get_clock()
@@ -68,17 +71,17 @@ class PerDestinationQueue(object):
self.transmission_loop_running = False
# a list of tuples of (pending pdu, order)
- self._pending_pdus = [] # type: list[tuple[EventBase, int]]
- self._pending_edus = [] # type: list[Edu]
+ self._pending_pdus = [] # type: list[tuple[EventBase, int]]
+ self._pending_edus = [] # type: list[Edu]
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
# based on their key (e.g. typing events by room_id)
# Map of (edu_type, key) -> Edu
- self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
+ self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
# Map of user_id -> UserPresenceState of pending presence to be sent to this
# destination
- self._pending_presence = {} # type: dict[str, UserPresenceState]
+ self._pending_presence = {} # type: dict[str, UserPresenceState]
# room_id -> receipt_type -> user_id -> receipt_dict
self._pending_rrs = {}
@@ -120,9 +123,7 @@ class PerDestinationQueue(object):
Args:
states (iterable[UserPresenceState]): presence to send
"""
- self._pending_presence.update({
- state.user_id: state for state in states
- })
+ self._pending_presence.update({state.user_id: state for state in states})
self.attempt_new_transaction()
def queue_read_receipt(self, receipt):
@@ -132,14 +133,9 @@ class PerDestinationQueue(object):
Args:
receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued
"""
- self._pending_rrs.setdefault(
- receipt.room_id, {},
- ).setdefault(
+ self._pending_rrs.setdefault(receipt.room_id, {}).setdefault(
receipt.receipt_type, {}
- )[receipt.user_id] = {
- "event_ids": receipt.event_ids,
- "data": receipt.data,
- }
+ )[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data}
def flush_read_receipts_for_room(self, room_id):
# if we don't have any read-receipts for this room, it may be that we've already
@@ -170,10 +166,7 @@ class PerDestinationQueue(object):
# request at which point pending_pdus just keeps growing.
# we need application-layer timeouts of some flavour of these
# requests
- logger.debug(
- "TX [%s] Transaction already in progress",
- self._destination
- )
+ logger.debug("TX [%s] Transaction already in progress", self._destination)
return
logger.debug("TX [%s] Starting transaction loop", self._destination)
@@ -197,7 +190,8 @@ class PerDestinationQueue(object):
pending_pdus = []
while True:
device_message_edus, device_stream_id, dev_list_id = (
- yield self._get_new_device_messages()
+ # We have to keep 2 free slots for presence and rr_edus
+ yield self._get_new_device_messages(MAX_EDUS_PER_TRANSACTION - 2)
)
# BEGIN CRITICAL SECTION
@@ -216,19 +210,9 @@ class PerDestinationQueue(object):
pending_edus = []
- pending_edus.extend(self._get_rr_edus(force_flush=False))
-
# We can only include at most 100 EDUs per transactions
- pending_edus.extend(self._pop_pending_edus(100 - len(pending_edus)))
-
- pending_edus.extend(
- self._pending_edus_keyed.values()
- )
-
- self._pending_edus_keyed = {}
-
- pending_edus.extend(device_message_edus)
-
+ # rr_edus and pending_presence take at most one slot each
+ pending_edus.extend(self._get_rr_edus(force_flush=False))
pending_presence = self._pending_presence
self._pending_presence = {}
if pending_presence:
@@ -248,9 +232,23 @@ class PerDestinationQueue(object):
)
)
+ pending_edus.extend(device_message_edus)
+ pending_edus.extend(
+ self._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
+ )
+ while (
+ len(pending_edus) < MAX_EDUS_PER_TRANSACTION
+ and self._pending_edus_keyed
+ ):
+ _, val = self._pending_edus_keyed.popitem()
+ pending_edus.append(val)
+
if pending_pdus:
- logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
- self._destination, len(pending_pdus))
+ logger.debug(
+ "TX [%s] len(pending_pdus_by_dest[dest]) = %d",
+ self._destination,
+ len(pending_pdus),
+ )
if not pending_pdus and not pending_edus:
logger.debug("TX [%s] Nothing to send", self._destination)
@@ -259,7 +257,7 @@ class PerDestinationQueue(object):
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
- if len(pending_edus) < 100:
+ if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
pending_edus.extend(self._get_rr_edus(force_flush=True))
# END CRITICAL SECTION
@@ -303,22 +301,25 @@ class PerDestinationQueue(object):
except HttpResponseException as e:
logger.warning(
"TX [%s] Received %d response to transaction: %s",
- self._destination, e.code, e,
+ self._destination,
+ e.code,
+ e,
)
except RequestSendFailed as e:
- logger.warning("TX [%s] Failed to send transaction: %s", self._destination, e)
+ logger.warning(
+ "TX [%s] Failed to send transaction: %s", self._destination, e
+ )
for p, _ in pending_pdus:
- logger.info("Failed to send event %s to %s", p.event_id,
- self._destination)
+ logger.info(
+ "Failed to send event %s to %s", p.event_id, self._destination
+ )
except Exception:
- logger.exception(
- "TX [%s] Failed to send transaction",
- self._destination,
- )
+ logger.exception("TX [%s] Failed to send transaction", self._destination)
for p, _ in pending_pdus:
- logger.info("Failed to send event %s to %s", p.event_id,
- self._destination)
+ logger.info(
+ "Failed to send event %s to %s", p.event_id, self._destination
+ )
finally:
# We want to be *very* sure we clear this after we stop processing
self.transmission_loop_running = False
@@ -346,33 +347,40 @@ class PerDestinationQueue(object):
return pending_edus
@defer.inlineCallbacks
- def _get_new_device_messages(self):
- last_device_stream_id = self._last_device_stream_id
- to_device_stream_id = self._store.get_to_device_stream_token()
- contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
- self._destination, last_device_stream_id, to_device_stream_id
+ def _get_new_device_messages(self, limit):
+ last_device_list = self._last_device_list_stream_id
+ # Will return at most 20 entries
+ now_stream_id, results = yield self._store.get_devices_by_remote(
+ self._destination, last_device_list
)
edus = [
Edu(
origin=self._server_name,
destination=self._destination,
- edu_type="m.direct_to_device",
+ edu_type="m.device_list_update",
content=content,
)
- for content in contents
+ for content in results
]
- last_device_list = self._last_device_list_stream_id
- now_stream_id, results = yield self._store.get_devices_by_remote(
- self._destination, last_device_list
+ assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
+
+ last_device_stream_id = self._last_device_stream_id
+ to_device_stream_id = self._store.get_to_device_stream_token()
+ contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
+ self._destination,
+ last_device_stream_id,
+ to_device_stream_id,
+ limit - len(edus),
)
edus.extend(
Edu(
origin=self._server_name,
destination=self._destination,
- edu_type="m.device_list_update",
+ edu_type="m.direct_to_device",
content=content,
)
- for content in results
+ for content in contents
)
+
defer.returnValue((edus, stream_id, now_stream_id))
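The reworked transmission loop above enforces `MAX_EDUS_PER_TRANSACTION` by reserving one slot each for read receipts and presence before device messages and other queued EDUs are added. A condensed sketch of that budgeting, with hypothetical argument names standing in for the queues the class maintains:

```python
# Condensed, illustrative version of the EDU budgeting; argument names are
# hypothetical stand-ins for the queues PerDestinationQueue keeps.
MAX_EDUS_PER_TRANSACTION = 100  # defined in the Matrix spec, enforced by receivers

def budget_edus(rr_edus, presence_edus, device_edus, queued_edus, keyed_edus):
    pending = []
    pending.extend(rr_edus)        # read receipts: at most one EDU
    pending.extend(presence_edus)  # presence: at most one EDU
    pending.extend(device_edus)    # fetched with limit = MAX_EDUS_PER_TRANSACTION - 2
    room_left = MAX_EDUS_PER_TRANSACTION - len(pending)
    pending.extend(queued_edus[:room_left])
    while len(pending) < MAX_EDUS_PER_TRANSACTION and keyed_edus:
        _, edu = keyed_edus.popitem()  # keyed EDUs fill any remaining slots
        pending.append(edu)
    return pending
```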
diff --git a/synapse/http/client.py b/synapse/http/client.py
index ad454f4964..77fe68818b 100644
--- a/synapse/http/client.py
+++ b/synapse/http/client.py
@@ -90,45 +90,50 @@ class IPBlacklistingResolver(object):
def resolveHostName(self, recv, hostname, portNumber=0):
r = recv()
- d = defer.Deferred()
addresses = []
- @provider(IResolutionReceiver)
- class EndpointReceiver(object):
- @staticmethod
- def resolutionBegan(resolutionInProgress):
- pass
+ def _callback():
+ r.resolutionBegan(None)
- @staticmethod
- def addressResolved(address):
- ip_address = IPAddress(address.host)
+ has_bad_ip = False
+ for i in addresses:
+ ip_address = IPAddress(i.host)
if check_against_blacklist(
ip_address, self._ip_whitelist, self._ip_blacklist
):
logger.info(
- "Dropped %s from DNS resolution to %s" % (ip_address, hostname)
+ "Dropped %s from DNS resolution to %s due to blacklist" %
+ (ip_address, hostname)
)
- raise SynapseError(403, "IP address blocked by IP blacklist entry")
+ has_bad_ip = True
+
+ # if we have a blacklisted IP, we'd like to raise an error to block the
+ # request, but all we can really do from here is claim that there were no
+ # valid results.
+ if not has_bad_ip:
+ for i in addresses:
+ r.addressResolved(i)
+ r.resolutionComplete()
+ @provider(IResolutionReceiver)
+ class EndpointReceiver(object):
+ @staticmethod
+ def resolutionBegan(resolutionInProgress):
+ pass
+
+ @staticmethod
+ def addressResolved(address):
addresses.append(address)
@staticmethod
def resolutionComplete():
- d.callback(addresses)
+ _callback()
self._reactor.nameResolver.resolveHostName(
EndpointReceiver, hostname, portNumber=portNumber
)
- def _callback(addrs):
- r.resolutionBegan(None)
- for i in addrs:
- r.addressResolved(i)
- r.resolutionComplete()
-
- d.addCallback(_callback)
-
return r
@@ -160,7 +165,8 @@ class BlacklistingAgentWrapper(Agent):
ip_address, self._ip_whitelist, self._ip_blacklist
):
logger.info(
- "Blocking access to %s because of blacklist" % (ip_address,)
+ "Blocking access to %s due to blacklist" %
+ (ip_address,)
)
e = SynapseError(403, "IP address blocked by IP blacklist entry")
return defer.fail(Failure(e))
@@ -258,9 +264,6 @@ class SimpleHttpClient(object):
uri (str): URI to query.
data (bytes): Data to send in the request body, if applicable.
headers (t.w.http_headers.Headers): Request headers.
-
- Raises:
- SynapseError: If the IP is blacklisted.
"""
# A small wrapper around self.agent.request() so we can easily attach
# counters to it
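The `IPBlacklistingResolver` change above buffers every resolved address and, once resolution completes, reports an empty result set if any address is blacklisted, rather than raising from inside the receiver. A rough sketch of that filtering step, assuming (as in the wrapped check) that whitelist entries override the blacklist:

```python
# Rough sketch of the post-resolution filtering; not the actual resolver class.
from netaddr import IPAddress, IPSet

def filter_resolved(hosts, ip_blacklist, ip_whitelist=None):
    """Return the resolved hosts, or [] if any of them is blacklisted."""
    whitelist = ip_whitelist or IPSet()
    for host in hosts:
        ip = IPAddress(host)
        if ip in ip_blacklist and ip not in whitelist:
            return []  # claim there were no valid results instead of raising
    return hosts

print(filter_resolved(["93.184.216.34"], IPSet(["10.0.0.0/8"])))  # kept
print(filter_resolved(["10.0.0.5"], IPSet(["10.0.0.0/8"])))       # dropped -> []
```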
diff --git a/synapse/http/matrixfederationclient.py b/synapse/http/matrixfederationclient.py
index ff63d0b2a8..7eefc7b1fc 100644
--- a/synapse/http/matrixfederationclient.py
+++ b/synapse/http/matrixfederationclient.py
@@ -27,9 +27,11 @@ import treq
from canonicaljson import encode_canonical_json
from prometheus_client import Counter
from signedjson.sign import sign_json
+from zope.interface import implementer
from twisted.internet import defer, protocol
from twisted.internet.error import DNSLookupError
+from twisted.internet.interfaces import IReactorPluggableNameResolver
from twisted.internet.task import _EPSILON, Cooperator
from twisted.web._newclient import ResponseDone
from twisted.web.http_headers import Headers
@@ -44,6 +46,7 @@ from synapse.api.errors import (
SynapseError,
)
from synapse.http import QuieterFileBodyProducer
+from synapse.http.client import BlacklistingAgentWrapper, IPBlacklistingResolver
from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
from synapse.util.async_helpers import timeout_deferred
from synapse.util.logcontext import make_deferred_yieldable
@@ -172,19 +175,44 @@ class MatrixFederationHttpClient(object):
self.hs = hs
self.signing_key = hs.config.signing_key[0]
self.server_name = hs.hostname
- reactor = hs.get_reactor()
+
+ real_reactor = hs.get_reactor()
+
+ # We need to use a DNS resolver which filters out blacklisted IP
+ # addresses, to prevent DNS rebinding.
+ nameResolver = IPBlacklistingResolver(
+ real_reactor, None, hs.config.federation_ip_range_blacklist,
+ )
+
+ @implementer(IReactorPluggableNameResolver)
+ class Reactor(object):
+ def __getattr__(_self, attr):
+ if attr == "nameResolver":
+ return nameResolver
+ else:
+ return getattr(real_reactor, attr)
+
+ self.reactor = Reactor()
self.agent = MatrixFederationAgent(
- hs.get_reactor(),
+ self.reactor,
tls_client_options_factory,
)
+
+ # Use a BlacklistingAgentWrapper to prevent circumventing the IP
+ # blacklist via IP literals in server names
+ self.agent = BlacklistingAgentWrapper(
+ self.agent, self.reactor,
+ ip_blacklist=hs.config.federation_ip_range_blacklist,
+ )
+
self.clock = hs.get_clock()
self._store = hs.get_datastore()
self.version_string_bytes = hs.version_string.encode('ascii')
self.default_timeout = 60
def schedule(x):
- reactor.callLater(_EPSILON, x)
+ self.reactor.callLater(_EPSILON, x)
self._cooperator = Cooperator(scheduler=schedule)
@@ -370,7 +398,7 @@ class MatrixFederationHttpClient(object):
request_deferred = timeout_deferred(
request_deferred,
timeout=_sec_timeout,
- reactor=self.hs.get_reactor(),
+ reactor=self.reactor,
)
response = yield request_deferred
@@ -397,7 +425,7 @@ class MatrixFederationHttpClient(object):
d = timeout_deferred(
d,
timeout=_sec_timeout,
- reactor=self.hs.get_reactor(),
+ reactor=self.reactor,
)
try:
@@ -586,7 +614,7 @@ class MatrixFederationHttpClient(object):
)
body = yield _handle_json_response(
- self.hs.get_reactor(), self.default_timeout, request, response,
+ self.reactor, self.default_timeout, request, response,
)
defer.returnValue(body)
@@ -645,7 +673,7 @@ class MatrixFederationHttpClient(object):
_sec_timeout = self.default_timeout
body = yield _handle_json_response(
- self.hs.get_reactor(), _sec_timeout, request, response,
+ self.reactor, _sec_timeout, request, response,
)
defer.returnValue(body)
@@ -704,7 +732,7 @@ class MatrixFederationHttpClient(object):
)
body = yield _handle_json_response(
- self.hs.get_reactor(), self.default_timeout, request, response,
+ self.reactor, self.default_timeout, request, response,
)
defer.returnValue(body)
@@ -753,7 +781,7 @@ class MatrixFederationHttpClient(object):
)
body = yield _handle_json_response(
- self.hs.get_reactor(), self.default_timeout, request, response,
+ self.reactor, self.default_timeout, request, response,
)
defer.returnValue(body)
@@ -801,7 +829,7 @@ class MatrixFederationHttpClient(object):
try:
d = _readBodyToFile(response, output_stream, max_size)
- d.addTimeout(self.default_timeout, self.hs.get_reactor())
+ d.addTimeout(self.default_timeout, self.reactor)
length = yield make_deferred_yieldable(d)
except Exception as e:
logger.warn(
diff --git a/synapse/rest/media/v1/preview_url_resource.py b/synapse/rest/media/v1/preview_url_resource.py
index ba3ab1d37d..acf87709f2 100644
--- a/synapse/rest/media/v1/preview_url_resource.py
+++ b/synapse/rest/media/v1/preview_url_resource.py
@@ -31,6 +31,7 @@ from six.moves import urllib_parse as urlparse
from canonicaljson import json
from twisted.internet import defer
+from twisted.internet.error import DNSLookupError
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
@@ -328,9 +329,18 @@ class PreviewUrlResource(Resource):
# handler will return a SynapseError to the client instead of
# blank data or a 500.
raise
+ except DNSLookupError:
+ # DNS lookup returned no results
+ # Note: This will also be the case if one of the resolved IP
+ # addresses is blacklisted
+ raise SynapseError(
+ 502, "DNS resolution failure during URL preview generation",
+ Codes.UNKNOWN
+ )
except Exception as e:
# FIXME: pass through 404s and other error messages nicely
logger.warn("Error downloading %s: %r", url, e)
+
raise SynapseError(
500, "Failed to download content: %s" % (
traceback.format_exception_only(sys.exc_info()[0], e),
diff --git a/synapse/rest/media/v1/storage_provider.py b/synapse/rest/media/v1/storage_provider.py
index 5aa03031f6..d90cbfb56a 100644
--- a/synapse/rest/media/v1/storage_provider.py
+++ b/synapse/rest/media/v1/storage_provider.py
@@ -108,6 +108,7 @@ class FileStorageProviderBackend(StorageProvider):
"""
def __init__(self, hs, config):
+ self.hs = hs
self.cache_directory = hs.config.media_store_path
self.base_directory = config
diff --git a/synapse/server.pyi b/synapse/server.pyi
index 3ba3a967c2..9583e82d52 100644
--- a/synapse/server.pyi
+++ b/synapse/server.pyi
@@ -18,7 +18,6 @@ import synapse.server_notices.server_notices_sender
import synapse.state
import synapse.storage
-
class HomeServer(object):
@property
def config(self) -> synapse.config.homeserver.HomeServerConfig:
diff --git a/synapse/storage/deviceinbox.py b/synapse/storage/deviceinbox.py
index fed4ea3610..9b0a99cb49 100644
--- a/synapse/storage/deviceinbox.py
+++ b/synapse/storage/deviceinbox.py
@@ -118,7 +118,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
defer.returnValue(count)
def get_new_device_msgs_for_remote(
- self, destination, last_stream_id, current_stream_id, limit=100
+ self, destination, last_stream_id, current_stream_id, limit
):
"""
Args:
diff --git a/tests/api/test_filtering.py b/tests/api/test_filtering.py
index 2a7044801a..6ba623de13 100644
--- a/tests/api/test_filtering.py
+++ b/tests/api/test_filtering.py
@@ -109,7 +109,6 @@ class FilteringTestCase(unittest.TestCase):
"event_format": "client",
"event_fields": ["type", "content", "sender"],
},
-
# a single backslash should be permitted (though it is debatable whether
# it should be permitted before anything other than `.`, and what that
# actually means)
diff --git a/tests/api/test_ratelimiting.py b/tests/api/test_ratelimiting.py
index 30a255d441..dbdd427cac 100644
--- a/tests/api/test_ratelimiting.py
+++ b/tests/api/test_ratelimiting.py
@@ -10,19 +10,19 @@ class TestRatelimiter(unittest.TestCase):
key="test_id", time_now_s=0, rate_hz=0.1, burst_count=1
)
self.assertTrue(allowed)
- self.assertEquals(10., time_allowed)
+ self.assertEquals(10.0, time_allowed)
allowed, time_allowed = limiter.can_do_action(
key="test_id", time_now_s=5, rate_hz=0.1, burst_count=1
)
self.assertFalse(allowed)
- self.assertEquals(10., time_allowed)
+ self.assertEquals(10.0, time_allowed)
allowed, time_allowed = limiter.can_do_action(
key="test_id", time_now_s=10, rate_hz=0.1, burst_count=1
)
self.assertTrue(allowed)
- self.assertEquals(20., time_allowed)
+ self.assertEquals(20.0, time_allowed)
def test_pruning(self):
limiter = Ratelimiter()
diff --git a/tests/app/test_openid_listener.py b/tests/app/test_openid_listener.py
index 590abc1e92..48792d1480 100644
--- a/tests/app/test_openid_listener.py
+++ b/tests/app/test_openid_listener.py
@@ -25,16 +25,18 @@ from tests.unittest import HomeserverTestCase
class FederationReaderOpenIDListenerTests(HomeserverTestCase):
def make_homeserver(self, reactor, clock):
hs = self.setup_test_homeserver(
- http_client=None, homeserverToUse=FederationReaderServer,
+ http_client=None, homeserverToUse=FederationReaderServer
)
return hs
- @parameterized.expand([
- (["federation"], "auth_fail"),
- ([], "no_resource"),
- (["openid", "federation"], "auth_fail"),
- (["openid"], "auth_fail"),
- ])
+ @parameterized.expand(
+ [
+ (["federation"], "auth_fail"),
+ ([], "no_resource"),
+ (["openid", "federation"], "auth_fail"),
+ (["openid"], "auth_fail"),
+ ]
+ )
def test_openid_listener(self, names, expectation):
"""
Test different openid listener configurations.
@@ -53,17 +55,14 @@ class FederationReaderOpenIDListenerTests(HomeserverTestCase):
# Grab the resource from the site that was told to listen
site = self.reactor.tcpServers[0][1]
try:
- self.resource = (
- site.resource.children[b"_matrix"].children[b"federation"]
- )
+ self.resource = site.resource.children[b"_matrix"].children[b"federation"]
except KeyError:
if expectation == "no_resource":
return
raise
request, channel = self.make_request(
- "GET",
- "/_matrix/federation/v1/openid/userinfo",
+ "GET", "/_matrix/federation/v1/openid/userinfo"
)
self.render(request)
@@ -74,16 +73,18 @@ class FederationReaderOpenIDListenerTests(HomeserverTestCase):
class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
def make_homeserver(self, reactor, clock):
hs = self.setup_test_homeserver(
- http_client=None, homeserverToUse=SynapseHomeServer,
+ http_client=None, homeserverToUse=SynapseHomeServer
)
return hs
- @parameterized.expand([
- (["federation"], "auth_fail"),
- ([], "no_resource"),
- (["openid", "federation"], "auth_fail"),
- (["openid"], "auth_fail"),
- ])
+ @parameterized.expand(
+ [
+ (["federation"], "auth_fail"),
+ ([], "no_resource"),
+ (["openid", "federation"], "auth_fail"),
+ (["openid"], "auth_fail"),
+ ]
+ )
def test_openid_listener(self, names, expectation):
"""
Test different openid listener configurations.
@@ -102,17 +103,14 @@ class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
# Grab the resource from the site that was told to listen
site = self.reactor.tcpServers[0][1]
try:
- self.resource = (
- site.resource.children[b"_matrix"].children[b"federation"]
- )
+ self.resource = site.resource.children[b"_matrix"].children[b"federation"]
except KeyError:
if expectation == "no_resource":
return
raise
request, channel = self.make_request(
- "GET",
- "/_matrix/federation/v1/openid/userinfo",
+ "GET", "/_matrix/federation/v1/openid/userinfo"
)
self.render(request)
diff --git a/tests/config/test_generate.py b/tests/config/test_generate.py
index 795b4c298d..5017cbce85 100644
--- a/tests/config/test_generate.py
+++ b/tests/config/test_generate.py
@@ -45,13 +45,7 @@ class ConfigGenerationTestCase(unittest.TestCase):
)
self.assertSetEqual(
- set(
- [
- "homeserver.yaml",
- "lemurs.win.log.config",
- "lemurs.win.signing.key",
- ]
- ),
+ set(["homeserver.yaml", "lemurs.win.log.config", "lemurs.win.signing.key"]),
set(os.listdir(self.dir)),
)
diff --git a/tests/config/test_room_directory.py b/tests/config/test_room_directory.py
index 47fffcfeb2..0ec10019b3 100644
--- a/tests/config/test_room_directory.py
+++ b/tests/config/test_room_directory.py
@@ -22,7 +22,8 @@ from tests import unittest
class RoomDirectoryConfigTestCase(unittest.TestCase):
def test_alias_creation_acl(self):
- config = yaml.safe_load("""
+ config = yaml.safe_load(
+ """
alias_creation_rules:
- user_id: "*bob*"
alias: "*"
@@ -38,43 +39,49 @@ class RoomDirectoryConfigTestCase(unittest.TestCase):
action: "allow"
room_list_publication_rules: []
- """)
+ """
+ )
rd_config = RoomDirectoryConfig()
rd_config.read_config(config)
- self.assertFalse(rd_config.is_alias_creation_allowed(
- user_id="@bob:example.com",
- room_id="!test",
- alias="#test:example.com",
- ))
-
- self.assertTrue(rd_config.is_alias_creation_allowed(
- user_id="@test:example.com",
- room_id="!test",
- alias="#unofficial_st:example.com",
- ))
-
- self.assertTrue(rd_config.is_alias_creation_allowed(
- user_id="@foobar:example.com",
- room_id="!test",
- alias="#test:example.com",
- ))
-
- self.assertTrue(rd_config.is_alias_creation_allowed(
- user_id="@gah:example.com",
- room_id="!test",
- alias="#goo:example.com",
- ))
-
- self.assertFalse(rd_config.is_alias_creation_allowed(
- user_id="@test:example.com",
- room_id="!test",
- alias="#test:example.com",
- ))
+ self.assertFalse(
+ rd_config.is_alias_creation_allowed(
+ user_id="@bob:example.com", room_id="!test", alias="#test:example.com"
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_alias_creation_allowed(
+ user_id="@test:example.com",
+ room_id="!test",
+ alias="#unofficial_st:example.com",
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_alias_creation_allowed(
+ user_id="@foobar:example.com",
+ room_id="!test",
+ alias="#test:example.com",
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_alias_creation_allowed(
+ user_id="@gah:example.com", room_id="!test", alias="#goo:example.com"
+ )
+ )
+
+ self.assertFalse(
+ rd_config.is_alias_creation_allowed(
+ user_id="@test:example.com", room_id="!test", alias="#test:example.com"
+ )
+ )
def test_room_publish_acl(self):
- config = yaml.safe_load("""
+ config = yaml.safe_load(
+ """
alias_creation_rules: []
room_list_publication_rules:
@@ -92,55 +99,66 @@ class RoomDirectoryConfigTestCase(unittest.TestCase):
action: "allow"
- room_id: "!test-deny"
action: "deny"
- """)
+ """
+ )
rd_config = RoomDirectoryConfig()
rd_config.read_config(config)
- self.assertFalse(rd_config.is_publishing_room_allowed(
- user_id="@bob:example.com",
- room_id="!test",
- aliases=["#test:example.com"],
- ))
-
- self.assertTrue(rd_config.is_publishing_room_allowed(
- user_id="@test:example.com",
- room_id="!test",
- aliases=["#unofficial_st:example.com"],
- ))
-
- self.assertTrue(rd_config.is_publishing_room_allowed(
- user_id="@foobar:example.com",
- room_id="!test",
- aliases=[],
- ))
-
- self.assertTrue(rd_config.is_publishing_room_allowed(
- user_id="@gah:example.com",
- room_id="!test",
- aliases=["#goo:example.com"],
- ))
-
- self.assertFalse(rd_config.is_publishing_room_allowed(
- user_id="@test:example.com",
- room_id="!test",
- aliases=["#test:example.com"],
- ))
-
- self.assertTrue(rd_config.is_publishing_room_allowed(
- user_id="@foobar:example.com",
- room_id="!test-deny",
- aliases=[],
- ))
-
- self.assertFalse(rd_config.is_publishing_room_allowed(
- user_id="@gah:example.com",
- room_id="!test-deny",
- aliases=[],
- ))
-
- self.assertTrue(rd_config.is_publishing_room_allowed(
- user_id="@test:example.com",
- room_id="!test",
- aliases=["#unofficial_st:example.com", "#blah:example.com"],
- ))
+ self.assertFalse(
+ rd_config.is_publishing_room_allowed(
+ user_id="@bob:example.com",
+ room_id="!test",
+ aliases=["#test:example.com"],
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_publishing_room_allowed(
+ user_id="@test:example.com",
+ room_id="!test",
+ aliases=["#unofficial_st:example.com"],
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_publishing_room_allowed(
+ user_id="@foobar:example.com", room_id="!test", aliases=[]
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_publishing_room_allowed(
+ user_id="@gah:example.com",
+ room_id="!test",
+ aliases=["#goo:example.com"],
+ )
+ )
+
+ self.assertFalse(
+ rd_config.is_publishing_room_allowed(
+ user_id="@test:example.com",
+ room_id="!test",
+ aliases=["#test:example.com"],
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_publishing_room_allowed(
+ user_id="@foobar:example.com", room_id="!test-deny", aliases=[]
+ )
+ )
+
+ self.assertFalse(
+ rd_config.is_publishing_room_allowed(
+ user_id="@gah:example.com", room_id="!test-deny", aliases=[]
+ )
+ )
+
+ self.assertTrue(
+ rd_config.is_publishing_room_allowed(
+ user_id="@test:example.com",
+ room_id="!test",
+ aliases=["#unofficial_st:example.com", "#blah:example.com"],
+ )
+ )
diff --git a/tests/config/test_server.py b/tests/config/test_server.py
index f5836d73ac..de64965a60 100644
--- a/tests/config/test_server.py
+++ b/tests/config/test_server.py
@@ -19,7 +19,6 @@ from tests import unittest
class ServerConfigTestCase(unittest.TestCase):
-
def test_is_threepid_reserved(self):
user1 = {'medium': 'email', 'address': 'user1@example.com'}
user2 = {'medium': 'email', 'address': 'user2@example.com'}
diff --git a/tests/config/test_tls.py b/tests/config/test_tls.py
index c260d3359f..40ca428778 100644
--- a/tests/config/test_tls.py
+++ b/tests/config/test_tls.py
@@ -26,7 +26,6 @@ class TestConfig(TlsConfig):
class TLSConfigTests(TestCase):
-
def test_warn_self_signed(self):
"""
Synapse will give a warning when it loads a self-signed certificate.
@@ -34,7 +33,8 @@ class TLSConfigTests(TestCase):
config_dir = self.mktemp()
os.mkdir(config_dir)
with open(os.path.join(config_dir, "cert.pem"), 'w') as f:
- f.write("""-----BEGIN CERTIFICATE-----
+ f.write(
+ """-----BEGIN CERTIFICATE-----
MIID6DCCAtACAws9CjANBgkqhkiG9w0BAQUFADCBtzELMAkGA1UEBhMCVFIxDzAN
BgNVBAgMBsOHb3J1bTEUMBIGA1UEBwwLQmHFn21ha8OnxLExEjAQBgNVBAMMCWxv
Y2FsaG9zdDEcMBoGA1UECgwTVHdpc3RlZCBNYXRyaXggTGFiczEkMCIGA1UECwwb
@@ -56,11 +56,12 @@ I8OtG1xGwcok53lyDuuUUDexnK4O5BkjKiVlNPg4HPim5Kuj2hRNFfNt/F2BVIlj
iZupikC5MT1LQaRwidkSNxCku1TfAyueiBwhLnFwTmIGNnhuDCutEVAD9kFmcJN2
SznugAcPk4doX2+rL+ila+ThqgPzIkwTUHtnmjI0TI6xsDUlXz5S3UyudrE2Qsfz
s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
------END CERTIFICATE-----""")
+-----END CERTIFICATE-----"""
+ )
config = {
"tls_certificate_path": os.path.join(config_dir, "cert.pem"),
- "tls_fingerprints": []
+ "tls_fingerprints": [],
}
t = TestConfig()
@@ -75,5 +76,5 @@ s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
"Self-signed TLS certificates will not be accepted by "
"Synapse 1.0. Please either provide a valid certificate, "
"or use Synapse's ACME support to provision one."
- )
+ ),
)
diff --git a/tests/crypto/test_keyring.py b/tests/crypto/test_keyring.py
index f5bd7a1aa1..3c79d4afe7 100644
--- a/tests/crypto/test_keyring.py
+++ b/tests/crypto/test_keyring.py
@@ -169,7 +169,7 @@ class KeyringTestCase(unittest.HomeserverTestCase):
self.http_client.post_json.return_value = defer.Deferred()
res_deferreds_2 = kr.verify_json_objects_for_server(
- [("server10", json1, )]
+ [("server10", json1)]
)
res_deferreds_2[0].addBoth(self.check_context, None)
yield logcontext.make_deferred_yieldable(res_deferreds_2[0])
@@ -345,6 +345,7 @@ def _verify_json_for_server(keyring, server_name, json_object):
"""thin wrapper around verify_json_for_server which makes sure it is wrapped
with the patched defer.inlineCallbacks.
"""
+
@defer.inlineCallbacks
def v():
rv1 = yield keyring.verify_json_for_server(server_name, json_object)
diff --git a/tests/federation/test_federation_sender.py b/tests/federation/test_federation_sender.py
index 28e7e27416..7bb106b5f7 100644
--- a/tests/federation/test_federation_sender.py
+++ b/tests/federation/test_federation_sender.py
@@ -33,11 +33,15 @@ class FederationSenderTestCases(HomeserverTestCase):
mock_state_handler = self.hs.get_state_handler()
mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]
- mock_send_transaction = self.hs.get_federation_transport_client().send_transaction
+ mock_send_transaction = (
+ self.hs.get_federation_transport_client().send_transaction
+ )
mock_send_transaction.return_value = defer.succeed({})
sender = self.hs.get_federation_sender()
- receipt = ReadReceipt("room_id", "m.read", "user_id", ["event_id"], {"ts": 1234})
+ receipt = ReadReceipt(
+ "room_id", "m.read", "user_id", ["event_id"], {"ts": 1234}
+ )
self.successResultOf(sender.send_read_receipt(receipt))
self.pump()
@@ -46,21 +50,24 @@ class FederationSenderTestCases(HomeserverTestCase):
mock_send_transaction.assert_called_once()
json_cb = mock_send_transaction.call_args[0][1]
data = json_cb()
- self.assertEqual(data['edus'], [
- {
- 'edu_type': 'm.receipt',
- 'content': {
- 'room_id': {
- 'm.read': {
- 'user_id': {
- 'event_ids': ['event_id'],
- 'data': {'ts': 1234},
- },
- },
+ self.assertEqual(
+ data['edus'],
+ [
+ {
+ 'edu_type': 'm.receipt',
+ 'content': {
+ 'room_id': {
+ 'm.read': {
+ 'user_id': {
+ 'event_ids': ['event_id'],
+ 'data': {'ts': 1234},
+ }
+ }
+ }
},
- },
- },
- ])
+ }
+ ],
+ )
def test_send_receipts_with_backoff(self):
"""Send two receipts in quick succession; the second should be flushed, but
@@ -68,11 +75,15 @@ class FederationSenderTestCases(HomeserverTestCase):
mock_state_handler = self.hs.get_state_handler()
mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]
- mock_send_transaction = self.hs.get_federation_transport_client().send_transaction
+ mock_send_transaction = (
+ self.hs.get_federation_transport_client().send_transaction
+ )
mock_send_transaction.return_value = defer.succeed({})
sender = self.hs.get_federation_sender()
- receipt = ReadReceipt("room_id", "m.read", "user_id", ["event_id"], {"ts": 1234})
+ receipt = ReadReceipt(
+ "room_id", "m.read", "user_id", ["event_id"], {"ts": 1234}
+ )
self.successResultOf(sender.send_read_receipt(receipt))
self.pump()
@@ -81,25 +92,30 @@ class FederationSenderTestCases(HomeserverTestCase):
mock_send_transaction.assert_called_once()
json_cb = mock_send_transaction.call_args[0][1]
data = json_cb()
- self.assertEqual(data['edus'], [
- {
- 'edu_type': 'm.receipt',
- 'content': {
- 'room_id': {
- 'm.read': {
- 'user_id': {
- 'event_ids': ['event_id'],
- 'data': {'ts': 1234},
- },
- },
+ self.assertEqual(
+ data['edus'],
+ [
+ {
+ 'edu_type': 'm.receipt',
+ 'content': {
+ 'room_id': {
+ 'm.read': {
+ 'user_id': {
+ 'event_ids': ['event_id'],
+ 'data': {'ts': 1234},
+ }
+ }
+ }
},
- },
- },
- ])
+ }
+ ],
+ )
mock_send_transaction.reset_mock()
# send the second RR
- receipt = ReadReceipt("room_id", "m.read", "user_id", ["other_id"], {"ts": 1234})
+ receipt = ReadReceipt(
+ "room_id", "m.read", "user_id", ["other_id"], {"ts": 1234}
+ )
self.successResultOf(sender.send_read_receipt(receipt))
self.pump()
mock_send_transaction.assert_not_called()
@@ -111,18 +127,21 @@ class FederationSenderTestCases(HomeserverTestCase):
mock_send_transaction.assert_called_once()
json_cb = mock_send_transaction.call_args[0][1]
data = json_cb()
- self.assertEqual(data['edus'], [
- {
- 'edu_type': 'm.receipt',
- 'content': {
- 'room_id': {
- 'm.read': {
- 'user_id': {
- 'event_ids': ['other_id'],
- 'data': {'ts': 1234},
- },
- },
+ self.assertEqual(
+ data['edus'],
+ [
+ {
+ 'edu_type': 'm.receipt',
+ 'content': {
+ 'room_id': {
+ 'm.read': {
+ 'user_id': {
+ 'event_ids': ['other_id'],
+ 'data': {'ts': 1234},
+ }
+ }
+ }
},
- },
- },
- ])
+ }
+ ],
+ )
diff --git a/tests/handlers/test_directory.py b/tests/handlers/test_directory.py
index 5b2105bc76..917548bb31 100644
--- a/tests/handlers/test_directory.py
+++ b/tests/handlers/test_directory.py
@@ -115,11 +115,7 @@ class TestCreateAliasACL(unittest.HomeserverTestCase):
# We cheekily override the config to add custom alias creation rules
config = {}
config["alias_creation_rules"] = [
- {
- "user_id": "*",
- "alias": "#unofficial_*",
- "action": "allow",
- }
+ {"user_id": "*", "alias": "#unofficial_*", "action": "allow"}
]
config["room_list_publication_rules"] = []
@@ -162,9 +158,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
room_id = self.helper.create_room_as(self.user_id)
request, channel = self.make_request(
- "PUT",
- b"directory/list/room/%s" % (room_id.encode('ascii'),),
- b'{}',
+ "PUT", b"directory/list/room/%s" % (room_id.encode('ascii'),), b'{}'
)
self.render(request)
self.assertEquals(200, channel.code, channel.result)
@@ -179,10 +173,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
self.directory_handler.enable_room_list_search = True
# Room list is enabled so we should get some results
- request, channel = self.make_request(
- "GET",
- b"publicRooms",
- )
+ request, channel = self.make_request("GET", b"publicRooms")
self.render(request)
self.assertEquals(200, channel.code, channel.result)
self.assertTrue(len(channel.json_body["chunk"]) > 0)
@@ -191,10 +182,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
self.directory_handler.enable_room_list_search = False
# Room list disabled so we should get no results
- request, channel = self.make_request(
- "GET",
- b"publicRooms",
- )
+ request, channel = self.make_request("GET", b"publicRooms")
self.render(request)
self.assertEquals(200, channel.code, channel.result)
self.assertTrue(len(channel.json_body["chunk"]) == 0)
@@ -202,9 +190,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
# Room list disabled so we shouldn't be allowed to publish rooms
room_id = self.helper.create_room_as(self.user_id)
request, channel = self.make_request(
- "PUT",
- b"directory/list/room/%s" % (room_id.encode('ascii'),),
- b'{}',
+ "PUT", b"directory/list/room/%s" % (room_id.encode('ascii'),), b'{}'
)
self.render(request)
self.assertEquals(403, channel.code, channel.result)
diff --git a/tests/handlers/test_e2e_room_keys.py b/tests/handlers/test_e2e_room_keys.py
index 1c49bbbc3c..2e72a1dd23 100644
--- a/tests/handlers/test_e2e_room_keys.py
+++ b/tests/handlers/test_e2e_room_keys.py
@@ -36,7 +36,7 @@ room_keys = {
"first_message_index": 1,
"forwarded_count": 1,
"is_verified": False,
- "session_data": "SSBBTSBBIEZJU0gK"
+ "session_data": "SSBBTSBBIEZJU0gK",
}
}
}
@@ -47,15 +47,13 @@ room_keys = {
class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(E2eRoomKeysHandlerTestCase, self).__init__(*args, **kwargs)
- self.hs = None # type: synapse.server.HomeServer
+ self.hs = None # type: synapse.server.HomeServer
self.handler = None # type: synapse.handlers.e2e_keys.E2eRoomKeysHandler
@defer.inlineCallbacks
def setUp(self):
self.hs = yield utils.setup_test_homeserver(
- self.addCleanup,
- handlers=None,
- replication_layer=mock.Mock(),
+ self.addCleanup, handlers=None, replication_layer=mock.Mock()
)
self.handler = synapse.handlers.e2e_room_keys.E2eRoomKeysHandler(self.hs)
self.local_user = "@boris:" + self.hs.hostname
@@ -88,67 +86,86 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_create_version(self):
"""Check that we can create and then retrieve versions.
"""
- res = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ res = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(res, "1")
# check we can retrieve it as the current version
res = yield self.handler.get_version_info(self.local_user)
- self.assertDictEqual(res, {
- "version": "1",
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ self.assertDictEqual(
+ res,
+ {
+ "version": "1",
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "first_version_auth_data",
+ },
+ )
# check we can retrieve it as a specific version
res = yield self.handler.get_version_info(self.local_user, "1")
- self.assertDictEqual(res, {
- "version": "1",
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ self.assertDictEqual(
+ res,
+ {
+ "version": "1",
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "first_version_auth_data",
+ },
+ )
# upload a new one...
- res = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "second_version_auth_data",
- })
+ res = yield self.handler.create_version(
+ self.local_user,
+ {
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "second_version_auth_data",
+ },
+ )
self.assertEqual(res, "2")
# check we can retrieve it as the current version
res = yield self.handler.get_version_info(self.local_user)
- self.assertDictEqual(res, {
- "version": "2",
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "second_version_auth_data",
- })
+ self.assertDictEqual(
+ res,
+ {
+ "version": "2",
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "second_version_auth_data",
+ },
+ )
@defer.inlineCallbacks
def test_update_version(self):
"""Check that we can update versions.
"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
- res = yield self.handler.update_version(self.local_user, version, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "revised_first_version_auth_data",
- "version": version
- })
+ res = yield self.handler.update_version(
+ self.local_user,
+ version,
+ {
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "revised_first_version_auth_data",
+ "version": version,
+ },
+ )
self.assertDictEqual(res, {})
# check we can retrieve it as the current version
res = yield self.handler.get_version_info(self.local_user)
- self.assertDictEqual(res, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "revised_first_version_auth_data",
- "version": version
- })
+ self.assertDictEqual(
+ res,
+ {
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "revised_first_version_auth_data",
+ "version": version,
+ },
+ )
@defer.inlineCallbacks
def test_update_missing_version(self):
@@ -156,11 +173,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
"""
res = None
try:
- yield self.handler.update_version(self.local_user, "1", {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "revised_first_version_auth_data",
- "version": "1"
- })
+ yield self.handler.update_version(
+ self.local_user,
+ "1",
+ {
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "revised_first_version_auth_data",
+ "version": "1",
+ },
+ )
except errors.SynapseError as e:
res = e.code
self.assertEqual(res, 404)
@@ -170,29 +191,37 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
"""Check that we get a 400 if the version in the body is missing or
doesn't match
"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
res = None
try:
- yield self.handler.update_version(self.local_user, version, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "revised_first_version_auth_data"
- })
+ yield self.handler.update_version(
+ self.local_user,
+ version,
+ {
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "revised_first_version_auth_data",
+ },
+ )
except errors.SynapseError as e:
res = e.code
self.assertEqual(res, 400)
res = None
try:
- yield self.handler.update_version(self.local_user, version, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "revised_first_version_auth_data",
- "version": "incorrect"
- })
+ yield self.handler.update_version(
+ self.local_user,
+ version,
+ {
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "revised_first_version_auth_data",
+ "version": "incorrect",
+ },
+ )
except errors.SynapseError as e:
res = e.code
self.assertEqual(res, 400)
@@ -223,10 +252,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_delete_version(self):
"""Check that we can create and then delete versions.
"""
- res = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ res = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(res, "1")
# check we can delete it
@@ -255,16 +284,14 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_get_missing_room_keys(self):
"""Check we get an empty response from an empty backup
"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
res = yield self.handler.get_room_keys(self.local_user, version)
- self.assertDictEqual(res, {
- "rooms": {}
- })
+ self.assertDictEqual(res, {"rooms": {}})
# TODO: test the locking semantics when uploading room_keys,
# although this is probably best done in sytest
@@ -275,7 +302,9 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
"""
res = None
try:
- yield self.handler.upload_room_keys(self.local_user, "no_version", room_keys)
+ yield self.handler.upload_room_keys(
+ self.local_user, "no_version", room_keys
+ )
except errors.SynapseError as e:
res = e.code
self.assertEqual(res, 404)
@@ -285,10 +314,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
"""Check that we get a 404 on uploading keys when an nonexistent version
is specified
"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
res = None
@@ -304,16 +333,19 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_upload_room_keys_wrong_version(self):
"""Check that we get a 403 on uploading keys for an old version
"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "second_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {
+ "algorithm": "m.megolm_backup.v1",
+ "auth_data": "second_version_auth_data",
+ },
+ )
self.assertEqual(version, "2")
res = None
@@ -327,10 +359,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_upload_room_keys_insert(self):
"""Check that we can insert and retrieve keys for a session
"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
@@ -340,18 +372,13 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
# check getting room_keys for a given room
res = yield self.handler.get_room_keys(
- self.local_user,
- version,
- room_id="!abc:matrix.org"
+ self.local_user, version, room_id="!abc:matrix.org"
)
self.assertDictEqual(res, room_keys)
# check getting room_keys for a given session_id
res = yield self.handler.get_room_keys(
- self.local_user,
- version,
- room_id="!abc:matrix.org",
- session_id="c0ff33",
+ self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
)
self.assertDictEqual(res, room_keys)
@@ -359,10 +386,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_upload_room_keys_merge(self):
"""Check that we can upload a new room_key for an existing session and
have it correctly merged"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
@@ -378,7 +405,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
res = yield self.handler.get_room_keys(self.local_user, version)
self.assertEqual(
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
- "SSBBTSBBIEZJU0gK"
+ "SSBBTSBBIEZJU0gK",
)
# test that marking the session as verified however /does/ replace it
@@ -387,8 +414,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
res = yield self.handler.get_room_keys(self.local_user, version)
self.assertEqual(
- res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
- "new"
+ res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'], "new"
)
# test that a session with a higher forwarded_count doesn't replace one
@@ -399,8 +425,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
res = yield self.handler.get_room_keys(self.local_user, version)
self.assertEqual(
- res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
- "new"
+ res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'], "new"
)
# TODO: check edge cases as well as the common variations here
@@ -409,56 +434,36 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_delete_room_keys(self):
"""Check that we can insert and delete keys for a session
"""
- version = yield self.handler.create_version(self.local_user, {
- "algorithm": "m.megolm_backup.v1",
- "auth_data": "first_version_auth_data",
- })
+ version = yield self.handler.create_version(
+ self.local_user,
+ {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+ )
self.assertEqual(version, "1")
# check for bulk-delete
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
yield self.handler.delete_room_keys(self.local_user, version)
res = yield self.handler.get_room_keys(
- self.local_user,
- version,
- room_id="!abc:matrix.org",
- session_id="c0ff33",
+ self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
)
- self.assertDictEqual(res, {
- "rooms": {}
- })
+ self.assertDictEqual(res, {"rooms": {}})
# check for bulk-delete per room
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
yield self.handler.delete_room_keys(
- self.local_user,
- version,
- room_id="!abc:matrix.org",
+ self.local_user, version, room_id="!abc:matrix.org"
)
res = yield self.handler.get_room_keys(
- self.local_user,
- version,
- room_id="!abc:matrix.org",
- session_id="c0ff33",
+ self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
)
- self.assertDictEqual(res, {
- "rooms": {}
- })
+ self.assertDictEqual(res, {"rooms": {}})
# check for bulk-delete per session
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
yield self.handler.delete_room_keys(
- self.local_user,
- version,
- room_id="!abc:matrix.org",
- session_id="c0ff33",
+ self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
)
res = yield self.handler.get_room_keys(
- self.local_user,
- version,
- room_id="!abc:matrix.org",
- session_id="c0ff33",
+ self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
)
- self.assertDictEqual(res, {
- "rooms": {}
- })
+ self.assertDictEqual(res, {"rooms": {}})
diff --git a/tests/handlers/test_presence.py b/tests/handlers/test_presence.py
index 94c6080e34..f70c6e7d65 100644
--- a/tests/handlers/test_presence.py
+++ b/tests/handlers/test_presence.py
@@ -424,8 +424,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
hs = self.setup_test_homeserver(
- "server", http_client=None,
- federation_sender=Mock(),
+ "server", http_client=None, federation_sender=Mock()
)
return hs
@@ -457,7 +456,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
# Mark test2 as online, test will be offline with a last_active of 0
self.presence_handler.set_state(
- UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE},
+ UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE}
)
self.reactor.pump([0]) # Wait for presence updates to be handled
@@ -506,13 +505,13 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
# Mark test as online
self.presence_handler.set_state(
- UserID.from_string("@test:server"), {"presence": PresenceState.ONLINE},
+ UserID.from_string("@test:server"), {"presence": PresenceState.ONLINE}
)
# Mark test2 as online, test will be offline with a last_active of 0.
# Note we don't join them to the room yet
self.presence_handler.set_state(
- UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE},
+ UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE}
)
# Add servers to the room
@@ -541,8 +540,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
)
self.assertEqual(expected_state.state, PresenceState.ONLINE)
self.federation_sender.send_presence_to_destinations.assert_called_once_with(
- destinations=set(("server2", "server3")),
- states=[expected_state]
+ destinations=set(("server2", "server3")), states=[expected_state]
)
def _add_new_user(self, room_id, user_id):
@@ -565,7 +563,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
type=EventTypes.Member,
sender=user_id,
state_key=user_id,
- content={"membership": Membership.JOIN}
+ content={"membership": Membership.JOIN},
)
prev_event_ids = self.get_success(
diff --git a/tests/handlers/test_register.py b/tests/handlers/test_register.py
index 017ea0385e..1c253d0579 100644
--- a/tests/handlers/test_register.py
+++ b/tests/handlers/test_register.py
@@ -37,8 +37,12 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
hs_config = self.default_config("test")
# some of the tests rely on us having a user consent version
- hs_config.user_consent_version = "test_consent_version"
- hs_config.max_mau_value = 50
+ hs_config["user_consent"] = {
+ "version": "test_consent_version",
+ "template_dir": ".",
+ }
+ hs_config["max_mau_value"] = 50
+ hs_config["limit_usage_by_mau"] = True
hs = self.setup_test_homeserver(config=hs_config, expire_access_token=True)
return hs
diff --git a/tests/handlers/test_typing.py b/tests/handlers/test_typing.py
index 5a0b6c201c..cb8b4d2913 100644
--- a/tests/handlers/test_typing.py
+++ b/tests/handlers/test_typing.py
@@ -64,20 +64,22 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
mock_federation_client.put_json.return_value = defer.succeed((200, "OK"))
hs = self.setup_test_homeserver(
- datastore=(Mock(
- spec=[
- # Bits that Federation needs
- "prep_send_transaction",
- "delivered_txn",
- "get_received_txn_response",
- "set_received_txn_response",
- "get_destination_retry_timings",
- "get_devices_by_remote",
- # Bits that user_directory needs
- "get_user_directory_stream_pos",
- "get_current_state_deltas",
- ]
- )),
+ datastore=(
+ Mock(
+ spec=[
+ # Bits that Federation needs
+ "prep_send_transaction",
+ "delivered_txn",
+ "get_received_txn_response",
+ "set_received_txn_response",
+ "get_destination_retry_timings",
+ "get_devices_by_remote",
+ # Bits that user_directory needs
+ "get_user_directory_stream_pos",
+ "get_current_state_deltas",
+ ]
+ )
+ ),
notifier=Mock(),
http_client=mock_federation_client,
keyring=mock_keyring,
@@ -87,7 +89,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
def prepare(self, reactor, clock, hs):
# the tests assume that we are starting at unix time 1000
- reactor.pump((1000, ))
+ reactor.pump((1000,))
mock_notifier = hs.get_notifier()
self.on_new_event = mock_notifier.on_new_event
@@ -114,6 +116,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
def check_joined_room(room_id, user_id):
if user_id not in [u.to_string() for u in self.room_members]:
raise AuthError(401, "User is not in the room")
+
hs.get_auth().check_joined_room = check_joined_room
def get_joined_hosts_for_room(room_id):
@@ -123,6 +126,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
def get_current_users_in_room(room_id):
return set(str(u) for u in self.room_members)
+
hs.get_state_handler().get_current_users_in_room = get_current_users_in_room
self.datastore.get_user_directory_stream_pos.return_value = (
@@ -141,21 +145,16 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
self.assertEquals(self.event_source.get_current_key(), 0)
- self.successResultOf(self.handler.started_typing(
- target_user=U_APPLE,
- auth_user=U_APPLE,
- room_id=ROOM_ID,
- timeout=20000,
- ))
-
- self.on_new_event.assert_has_calls(
- [call('typing_key', 1, rooms=[ROOM_ID])]
+ self.successResultOf(
+ self.handler.started_typing(
+ target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=20000
+ )
)
+ self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
+
self.assertEquals(self.event_source.get_current_key(), 1)
- events = self.event_source.get_new_events(
- room_ids=[ROOM_ID], from_key=0
- )
+ events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
self.assertEquals(
events[0],
[
@@ -170,12 +169,11 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
def test_started_typing_remote_send(self):
self.room_members = [U_APPLE, U_ONION]
- self.successResultOf(self.handler.started_typing(
- target_user=U_APPLE,
- auth_user=U_APPLE,
- room_id=ROOM_ID,
- timeout=20000,
- ))
+ self.successResultOf(
+ self.handler.started_typing(
+ target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=20000
+ )
+ )
put_json = self.hs.get_http_client().put_json
put_json.assert_called_once_with(
@@ -216,14 +214,10 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
self.render(request)
self.assertEqual(channel.code, 200)
- self.on_new_event.assert_has_calls(
- [call('typing_key', 1, rooms=[ROOM_ID])]
- )
+ self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
self.assertEquals(self.event_source.get_current_key(), 1)
- events = self.event_source.get_new_events(
- room_ids=[ROOM_ID], from_key=0
- )
+ events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
self.assertEquals(
events[0],
[
@@ -247,14 +241,14 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
self.assertEquals(self.event_source.get_current_key(), 0)
- self.successResultOf(self.handler.stopped_typing(
- target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID
- ))
-
- self.on_new_event.assert_has_calls(
- [call('typing_key', 1, rooms=[ROOM_ID])]
+ self.successResultOf(
+ self.handler.stopped_typing(
+ target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID
+ )
)
+ self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
+
put_json = self.hs.get_http_client().put_json
put_json.assert_called_once_with(
"farm",
@@ -274,18 +268,10 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
)
self.assertEquals(self.event_source.get_current_key(), 1)
- events = self.event_source.get_new_events(
- room_ids=[ROOM_ID], from_key=0
- )
+ events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
self.assertEquals(
events[0],
- [
- {
- "type": "m.typing",
- "room_id": ROOM_ID,
- "content": {"user_ids": []},
- }
- ],
+ [{"type": "m.typing", "room_id": ROOM_ID, "content": {"user_ids": []}}],
)
def test_typing_timeout(self):
@@ -293,22 +279,17 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
self.assertEquals(self.event_source.get_current_key(), 0)
- self.successResultOf(self.handler.started_typing(
- target_user=U_APPLE,
- auth_user=U_APPLE,
- room_id=ROOM_ID,
- timeout=10000,
- ))
-
- self.on_new_event.assert_has_calls(
- [call('typing_key', 1, rooms=[ROOM_ID])]
+ self.successResultOf(
+ self.handler.started_typing(
+ target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=10000
+ )
)
+
+ self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
self.on_new_event.reset_mock()
self.assertEquals(self.event_source.get_current_key(), 1)
- events = self.event_source.get_new_events(
- room_ids=[ROOM_ID], from_key=0
- )
+ events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
self.assertEquals(
events[0],
[
@@ -320,45 +301,30 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
],
)
- self.reactor.pump([16, ])
+ self.reactor.pump([16])
- self.on_new_event.assert_has_calls(
- [call('typing_key', 2, rooms=[ROOM_ID])]
- )
+ self.on_new_event.assert_has_calls([call('typing_key', 2, rooms=[ROOM_ID])])
self.assertEquals(self.event_source.get_current_key(), 2)
- events = self.event_source.get_new_events(
- room_ids=[ROOM_ID], from_key=1
- )
+ events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=1)
self.assertEquals(
events[0],
- [
- {
- "type": "m.typing",
- "room_id": ROOM_ID,
- "content": {"user_ids": []},
- }
- ],
+ [{"type": "m.typing", "room_id": ROOM_ID, "content": {"user_ids": []}}],
)
# SYN-230 - see if we can still set after timeout
- self.successResultOf(self.handler.started_typing(
- target_user=U_APPLE,
- auth_user=U_APPLE,
- room_id=ROOM_ID,
- timeout=10000,
- ))
-
- self.on_new_event.assert_has_calls(
- [call('typing_key', 3, rooms=[ROOM_ID])]
+ self.successResultOf(
+ self.handler.started_typing(
+ target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=10000
+ )
)
+
+ self.on_new_event.assert_has_calls([call('typing_key', 3, rooms=[ROOM_ID])])
self.on_new_event.reset_mock()
self.assertEquals(self.event_source.get_current_key(), 3)
- events = self.event_source.get_new_events(
- room_ids=[ROOM_ID], from_key=0
- )
+ events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
self.assertEquals(
events[0],
[
diff --git a/tests/handlers/test_user_directory.py b/tests/handlers/test_user_directory.py
index 7dd1a1daf8..9021e647fe 100644
--- a/tests/handlers/test_user_directory.py
+++ b/tests/handlers/test_user_directory.py
@@ -37,7 +37,7 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
- config.update_user_directory = True
+ config["update_user_directory"] = True
return self.setup_test_homeserver(config=config)
def prepare(self, reactor, clock, hs):
@@ -333,7 +333,7 @@ class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
- config.update_user_directory = True
+ config["update_user_directory"] = True
hs = self.setup_test_homeserver(config=config)
self.config = hs.config
@@ -352,9 +352,7 @@ class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
# Assert user directory is not empty
request, channel = self.make_request(
- "POST",
- b"user_directory/search",
- b'{"search_term":"user2"}',
+ "POST", b"user_directory/search", b'{"search_term":"user2"}'
)
self.render(request)
self.assertEquals(200, channel.code, channel.result)
@@ -363,9 +361,7 @@ class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
# Disable user directory and check search returns nothing
self.config.user_directory_search_enabled = False
request, channel = self.make_request(
- "POST",
- b"user_directory/search",
- b'{"search_term":"user2"}',
+ "POST", b"user_directory/search", b'{"search_term":"user2"}'
)
self.render(request)
self.assertEquals(200, channel.code, channel.result)
diff --git a/tests/http/__init__.py b/tests/http/__init__.py
index ee8010f598..851fc0eb33 100644
--- a/tests/http/__init__.py
+++ b/tests/http/__init__.py
@@ -24,14 +24,12 @@ def get_test_cert_file():
#
# openssl req -x509 -newkey rsa:4096 -keyout server.pem -out server.pem -days 36500 \
# -nodes -subj '/CN=testserv'
- return os.path.join(
- os.path.dirname(__file__),
- 'server.pem',
- )
+ return os.path.join(os.path.dirname(__file__), 'server.pem')
class ServerTLSContext(object):
"""A TLS Context which presents our test cert."""
+
def __init__(self):
self.filename = get_test_cert_file()
diff --git a/tests/http/federation/test_matrix_federation_agent.py b/tests/http/federation/test_matrix_federation_agent.py
index e9eb662c4c..ed0ca079d9 100644
--- a/tests/http/federation/test_matrix_federation_agent.py
+++ b/tests/http/federation/test_matrix_federation_agent.py
@@ -54,7 +54,9 @@ class MatrixFederationAgentTests(TestCase):
self.agent = MatrixFederationAgent(
reactor=self.reactor,
- tls_client_options_factory=ClientTLSOptionsFactory(default_config("test")),
+ tls_client_options_factory=ClientTLSOptionsFactory(
+ default_config("test", parse=True)
+ ),
_well_known_tls_policy=TrustingTLSPolicyForHTTPS(),
_srv_resolver=self.mock_resolver,
_well_known_cache=self.well_known_cache,
@@ -79,12 +81,12 @@ class MatrixFederationAgentTests(TestCase):
# stubbing that out here.
client_protocol = client_factory.buildProtocol(None)
client_protocol.makeConnection(
- FakeTransport(server_tls_protocol, self.reactor, client_protocol),
+ FakeTransport(server_tls_protocol, self.reactor, client_protocol)
)
# tell the server tls protocol to send its stuff back to the client, too
server_tls_protocol.makeConnection(
- FakeTransport(client_protocol, self.reactor, server_tls_protocol),
+ FakeTransport(client_protocol, self.reactor, server_tls_protocol)
)
# give the reactor a pump to get the TLS juices flowing.
@@ -125,7 +127,7 @@ class MatrixFederationAgentTests(TestCase):
_check_logcontext(context)
def _handle_well_known_connection(
- self, client_factory, expected_sni, content, response_headers={},
+ self, client_factory, expected_sni, content, response_headers={}
):
"""Handle an outgoing HTTPs connection: wire it up to a server, check that the
request is for a .well-known, and send the response.
@@ -139,8 +141,7 @@ class MatrixFederationAgentTests(TestCase):
"""
# make the connection for .well-known
well_known_server = self._make_connection(
- client_factory,
- expected_sni=expected_sni,
+ client_factory, expected_sni=expected_sni
)
# check the .well-known request and send a response
self.assertEqual(len(well_known_server.requests), 1)
@@ -154,17 +155,14 @@ class MatrixFederationAgentTests(TestCase):
"""
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/.well-known/matrix/server')
- self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'testserv'],
- )
+ self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
# send back a response
for k, v in headers.items():
request.setHeader(k, v)
request.write(content)
request.finish()
- self.reactor.pump((0.1, ))
+ self.reactor.pump((0.1,))
def test_get(self):
"""
@@ -184,18 +182,14 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 8448)
# make a test server, and wire up the client
- http_server = self._make_connection(
- client_factory,
- expected_sni=b"testserv",
- )
+ http_server = self._make_connection(client_factory, expected_sni=b"testserv")
self.assertEqual(len(http_server.requests), 1)
request = http_server.requests[0]
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'testserv:8448']
+ request.requestHeaders.getRawHeaders(b'host'), [b'testserv:8448']
)
content = request.content.read()
self.assertEqual(content, b'')
@@ -244,19 +238,13 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 8448)
# make a test server, and wire up the client
- http_server = self._make_connection(
- client_factory,
- expected_sni=None,
- )
+ http_server = self._make_connection(client_factory, expected_sni=None)
self.assertEqual(len(http_server.requests), 1)
request = http_server.requests[0]
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
- self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'1.2.3.4'],
- )
+ self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'1.2.3.4'])
# finish the request
request.finish()
@@ -285,19 +273,13 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 8448)
# make a test server, and wire up the client
- http_server = self._make_connection(
- client_factory,
- expected_sni=None,
- )
+ http_server = self._make_connection(client_factory, expected_sni=None)
self.assertEqual(len(http_server.requests), 1)
request = http_server.requests[0]
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
- self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'[::1]'],
- )
+ self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'[::1]'])
# finish the request
request.finish()
@@ -326,19 +308,13 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 80)
# make a test server, and wire up the client
- http_server = self._make_connection(
- client_factory,
- expected_sni=None,
- )
+ http_server = self._make_connection(client_factory, expected_sni=None)
self.assertEqual(len(http_server.requests), 1)
request = http_server.requests[0]
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
- self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'[::1]:80'],
- )
+ self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'[::1]:80'])
# finish the request
request.finish()
@@ -377,7 +353,7 @@ class MatrixFederationAgentTests(TestCase):
# now there should be a SRV lookup
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.testserv",
+ b"_matrix._tcp.testserv"
)
# we should fall back to a direct connection
@@ -387,19 +363,13 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 8448)
# make a test server, and wire up the client
- http_server = self._make_connection(
- client_factory,
- expected_sni=b'testserv',
- )
+ http_server = self._make_connection(client_factory, expected_sni=b'testserv')
self.assertEqual(len(http_server.requests), 1)
request = http_server.requests[0]
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
- self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'testserv'],
- )
+ self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
# finish the request
request.finish()
@@ -427,13 +397,14 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 443)
self._handle_well_known_connection(
- client_factory, expected_sni=b"testserv",
+ client_factory,
+ expected_sni=b"testserv",
content=b'{ "m.server": "target-server" }',
)
# there should be a SRV lookup
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.target-server",
+ b"_matrix._tcp.target-server"
)
# now we should get a connection to the target server
@@ -444,8 +415,7 @@ class MatrixFederationAgentTests(TestCase):
# make a test server, and wire up the client
http_server = self._make_connection(
- client_factory,
- expected_sni=b'target-server',
+ client_factory, expected_sni=b'target-server'
)
self.assertEqual(len(http_server.requests), 1)
@@ -453,8 +423,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'target-server'],
+ request.requestHeaders.getRawHeaders(b'host'), [b'target-server']
)
# finish the request
@@ -490,8 +459,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 443)
redirect_server = self._make_connection(
- client_factory,
- expected_sni=b"testserv",
+ client_factory, expected_sni=b"testserv"
)
# send a 302 redirect
@@ -500,7 +468,7 @@ class MatrixFederationAgentTests(TestCase):
request.redirect(b'https://testserv/even_better_known')
request.finish()
- self.reactor.pump((0.1, ))
+ self.reactor.pump((0.1,))
# now there should be another connection
clients = self.reactor.tcpClients
@@ -510,8 +478,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 443)
well_known_server = self._make_connection(
- client_factory,
- expected_sni=b"testserv",
+ client_factory, expected_sni=b"testserv"
)
self.assertEqual(len(well_known_server.requests), 1, "No request after 302")
@@ -521,11 +488,11 @@ class MatrixFederationAgentTests(TestCase):
request.write(b'{ "m.server": "target-server" }')
request.finish()
- self.reactor.pump((0.1, ))
+ self.reactor.pump((0.1,))
# there should be a SRV lookup
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.target-server",
+ b"_matrix._tcp.target-server"
)
# now we should get a connection to the target server
@@ -536,8 +503,7 @@ class MatrixFederationAgentTests(TestCase):
# make a test server, and wire up the client
http_server = self._make_connection(
- client_factory,
- expected_sni=b'target-server',
+ client_factory, expected_sni=b'target-server'
)
self.assertEqual(len(http_server.requests), 1)
@@ -545,8 +511,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'target-server'],
+ request.requestHeaders.getRawHeaders(b'host'), [b'target-server']
)
# finish the request
@@ -585,12 +550,12 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 443)
self._handle_well_known_connection(
- client_factory, expected_sni=b"testserv", content=b'NOT JSON',
+ client_factory, expected_sni=b"testserv", content=b'NOT JSON'
)
# now there should be a SRV lookup
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.testserv",
+ b"_matrix._tcp.testserv"
)
# we should fall back to a direct connection
@@ -600,19 +565,13 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 8448)
# make a test server, and wire up the client
- http_server = self._make_connection(
- client_factory,
- expected_sni=b'testserv',
- )
+ http_server = self._make_connection(client_factory, expected_sni=b'testserv')
self.assertEqual(len(http_server.requests), 1)
request = http_server.requests[0]
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
- self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'testserv'],
- )
+ self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
# finish the request
request.finish()
@@ -635,7 +594,7 @@ class MatrixFederationAgentTests(TestCase):
# the request for a .well-known will have failed with a DNS lookup error.
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.testserv",
+ b"_matrix._tcp.testserv"
)
# Make sure treq is trying to connect
@@ -646,19 +605,13 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 8443)
# make a test server, and wire up the client
- http_server = self._make_connection(
- client_factory,
- expected_sni=b'testserv',
- )
+ http_server = self._make_connection(client_factory, expected_sni=b'testserv')
self.assertEqual(len(http_server.requests), 1)
request = http_server.requests[0]
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
- self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'testserv'],
- )
+ self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
# finish the request
request.finish()
@@ -685,17 +638,18 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(port, 443)
self.mock_resolver.resolve_service.side_effect = lambda _: [
- Server(host=b"srvtarget", port=8443),
+ Server(host=b"srvtarget", port=8443)
]
self._handle_well_known_connection(
- client_factory, expected_sni=b"testserv",
+ client_factory,
+ expected_sni=b"testserv",
content=b'{ "m.server": "target-server" }',
)
# there should be a SRV lookup
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.target-server",
+ b"_matrix._tcp.target-server"
)
# now we should get a connection to the target of the SRV record
@@ -706,8 +660,7 @@ class MatrixFederationAgentTests(TestCase):
# make a test server, and wire up the client
http_server = self._make_connection(
- client_factory,
- expected_sni=b'target-server',
+ client_factory, expected_sni=b'target-server'
)
self.assertEqual(len(http_server.requests), 1)
@@ -715,8 +668,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'target-server'],
+ request.requestHeaders.getRawHeaders(b'host'), [b'target-server']
)
# finish the request
@@ -757,7 +709,7 @@ class MatrixFederationAgentTests(TestCase):
# now there should have been a SRV lookup
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.xn--bcher-kva.com",
+ b"_matrix._tcp.xn--bcher-kva.com"
)
# We should fall back to port 8448
@@ -769,8 +721,7 @@ class MatrixFederationAgentTests(TestCase):
# make a test server, and wire up the client
http_server = self._make_connection(
- client_factory,
- expected_sni=b'xn--bcher-kva.com',
+ client_factory, expected_sni=b'xn--bcher-kva.com'
)
self.assertEqual(len(http_server.requests), 1)
@@ -778,8 +729,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'xn--bcher-kva.com'],
+ request.requestHeaders.getRawHeaders(b'host'), [b'xn--bcher-kva.com']
)
# finish the request
@@ -801,7 +751,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertNoResult(test_d)
self.mock_resolver.resolve_service.assert_called_once_with(
- b"_matrix._tcp.xn--bcher-kva.com",
+ b"_matrix._tcp.xn--bcher-kva.com"
)
# Make sure treq is trying to connect
@@ -813,8 +763,7 @@ class MatrixFederationAgentTests(TestCase):
# make a test server, and wire up the client
http_server = self._make_connection(
- client_factory,
- expected_sni=b'xn--bcher-kva.com',
+ client_factory, expected_sni=b'xn--bcher-kva.com'
)
self.assertEqual(len(http_server.requests), 1)
@@ -822,8 +771,7 @@ class MatrixFederationAgentTests(TestCase):
self.assertEqual(request.method, b'GET')
self.assertEqual(request.path, b'/foo/bar')
self.assertEqual(
- request.requestHeaders.getRawHeaders(b'host'),
- [b'xn--bcher-kva.com'],
+ request.requestHeaders.getRawHeaders(b'host'), [b'xn--bcher-kva.com']
)
# finish the request
@@ -897,67 +845,70 @@ class TestCachePeriodFromHeaders(TestCase):
# uppercase
self.assertEqual(
_cache_period_from_headers(
- Headers({b'Cache-Control': [b'foo, Max-Age = 100, bar']}),
- ), 100,
+ Headers({b'Cache-Control': [b'foo, Max-Age = 100, bar']})
+ ),
+ 100,
)
# missing value
- self.assertIsNone(_cache_period_from_headers(
- Headers({b'Cache-Control': [b'max-age=, bar']}),
- ))
+ self.assertIsNone(
+ _cache_period_from_headers(Headers({b'Cache-Control': [b'max-age=, bar']}))
+ )
# hackernews: bogus due to semicolon
- self.assertIsNone(_cache_period_from_headers(
- Headers({b'Cache-Control': [b'private; max-age=0']}),
- ))
+ self.assertIsNone(
+ _cache_period_from_headers(
+ Headers({b'Cache-Control': [b'private; max-age=0']})
+ )
+ )
# github
self.assertEqual(
_cache_period_from_headers(
- Headers({b'Cache-Control': [b'max-age=0, private, must-revalidate']}),
- ), 0,
+ Headers({b'Cache-Control': [b'max-age=0, private, must-revalidate']})
+ ),
+ 0,
)
# google
self.assertEqual(
_cache_period_from_headers(
- Headers({b'cache-control': [b'private, max-age=0']}),
- ), 0,
+ Headers({b'cache-control': [b'private, max-age=0']})
+ ),
+ 0,
)
def test_expires(self):
self.assertEqual(
_cache_period_from_headers(
Headers({b'Expires': [b'Wed, 30 Jan 2019 07:35:33 GMT']}),
- time_now=lambda: 1548833700
- ), 33,
+ time_now=lambda: 1548833700,
+ ),
+ 33,
)
# cache-control overrides expires
self.assertEqual(
_cache_period_from_headers(
- Headers({
- b'cache-control': [b'max-age=10'],
- b'Expires': [b'Wed, 30 Jan 2019 07:35:33 GMT']
- }),
- time_now=lambda: 1548833700
- ), 10,
+ Headers(
+ {
+ b'cache-control': [b'max-age=10'],
+ b'Expires': [b'Wed, 30 Jan 2019 07:35:33 GMT'],
+ }
+ ),
+ time_now=lambda: 1548833700,
+ ),
+ 10,
)
# invalid expires means immediate expiry
- self.assertEqual(
- _cache_period_from_headers(
- Headers({b'Expires': [b'0']}),
- ), 0,
- )
+ self.assertEqual(_cache_period_from_headers(Headers({b'Expires': [b'0']})), 0)
def _check_logcontext(context):
current = LoggingContext.current_context()
if current is not context:
- raise AssertionError(
- "Expected logcontext %s but was %s" % (context, current),
- )
+ raise AssertionError("Expected logcontext %s but was %s" % (context, current))
def _build_test_server():
@@ -973,7 +924,7 @@ def _build_test_server():
server_factory.log = _log_request
server_tls_factory = TLSMemoryBIOFactory(
- ServerTLSContext(), isClient=False, wrappedFactory=server_factory,
+ ServerTLSContext(), isClient=False, wrappedFactory=server_factory
)
return server_tls_factory.buildProtocol(None)
@@ -987,6 +938,7 @@ def _log_request(request):
@implementer(IPolicyForHTTPS)
class TrustingTLSPolicyForHTTPS(object):
"""An IPolicyForHTTPS which doesn't do any certificate verification"""
+
def creatorForNetloc(self, hostname, port):
certificateOptions = OpenSSLCertificateOptions()
return ClientTLSOptions(hostname, certificateOptions.getContext())
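
The `TestCachePeriodFromHeaders` hunks above exercise how `_cache_period_from_headers` reads `max-age` out of `Cache-Control` (case-insensitively, tolerating whitespace around `=`, and ignoring malformed or semicolon-separated values). As a rough illustration (a hypothetical sketch of the behaviour the tests expect, not Synapse's implementation), the `max-age` handling works roughly like this:

```python
# Hypothetical sketch of the max-age behaviour exercised by the tests above;
# not Synapse's actual _cache_period_from_headers.
def cache_period_from_cache_control(header_value: bytes):
    """Return the max-age in seconds, or None if it is absent or unparseable."""
    for directive in header_value.split(b","):
        name, sep, value = directive.strip().partition(b"=")
        # directive names are case-insensitive; whitespace around '=' is tolerated
        if sep and name.strip().lower() == b"max-age":
            try:
                return int(value.strip())
            except ValueError:
                return None
    return None

assert cache_period_from_cache_control(b"foo, Max-Age = 100, bar") == 100
assert cache_period_from_cache_control(b"max-age=, bar") is None
assert cache_period_from_cache_control(b"private; max-age=0") is None
```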
diff --git a/tests/http/federation/test_srv_resolver.py b/tests/http/federation/test_srv_resolver.py
index a872e2441e..034c0db8d2 100644
--- a/tests/http/federation/test_srv_resolver.py
+++ b/tests/http/federation/test_srv_resolver.py
@@ -68,9 +68,7 @@ class SrvResolverTestCase(unittest.TestCase):
dns_client_mock.lookupService.assert_called_once_with(service_name)
- result_deferred.callback(
- ([answer_srv], None, None)
- )
+ result_deferred.callback(([answer_srv], None, None))
servers = self.successResultOf(test_d)
@@ -112,7 +110,7 @@ class SrvResolverTestCase(unittest.TestCase):
cache = {service_name: [entry]}
resolver = SrvResolver(
- dns_client=dns_client_mock, cache=cache, get_time=clock.time,
+ dns_client=dns_client_mock, cache=cache, get_time=clock.time
)
servers = yield resolver.resolve_service(service_name)
@@ -168,11 +166,13 @@ class SrvResolverTestCase(unittest.TestCase):
self.assertNoResult(resolve_d)
# returning a single "." should make the lookup fail with a ConnectError
- lookup_deferred.callback((
- [dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"."))],
- None,
- None,
- ))
+ lookup_deferred.callback(
+ (
+ [dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"."))],
+ None,
+ None,
+ )
+ )
self.failureResultOf(resolve_d, ConnectError)
@@ -191,14 +191,16 @@ class SrvResolverTestCase(unittest.TestCase):
resolve_d = resolver.resolve_service(service_name)
self.assertNoResult(resolve_d)
- lookup_deferred.callback((
- [
- dns.RRHeader(type=dns.A, payload=dns.Record_A()),
- dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"host")),
- ],
- None,
- None,
- ))
+ lookup_deferred.callback(
+ (
+ [
+ dns.RRHeader(type=dns.A, payload=dns.Record_A()),
+ dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"host")),
+ ],
+ None,
+ None,
+ )
+ )
servers = self.successResultOf(resolve_d)
diff --git a/tests/http/test_fedclient.py b/tests/http/test_fedclient.py
index cd8e086f86..ee767f3a5a 100644
--- a/tests/http/test_fedclient.py
+++ b/tests/http/test_fedclient.py
@@ -15,6 +15,8 @@
from mock import Mock
+from netaddr import IPSet
+
from twisted.internet import defer
from twisted.internet.defer import TimeoutError
from twisted.internet.error import ConnectingCancelledError, DNSLookupError
@@ -36,9 +38,7 @@ from tests.unittest import HomeserverTestCase
def check_logcontext(context):
current = LoggingContext.current_context()
if current is not context:
- raise AssertionError(
- "Expected logcontext %s but was %s" % (context, current),
- )
+ raise AssertionError("Expected logcontext %s but was %s" % (context, current))
class FederationClientTests(HomeserverTestCase):
@@ -54,6 +54,7 @@ class FederationClientTests(HomeserverTestCase):
"""
happy-path test of a GET request
"""
+
@defer.inlineCallbacks
def do_request():
with LoggingContext("one") as context:
@@ -175,8 +176,7 @@ class FederationClientTests(HomeserverTestCase):
self.assertIsInstance(f.value, RequestSendFailed)
self.assertIsInstance(
- f.value.inner_exception,
- (ConnectingCancelledError, TimeoutError),
+ f.value.inner_exception, (ConnectingCancelledError, TimeoutError)
)
def test_client_connect_no_response(self):
@@ -211,14 +211,81 @@ class FederationClientTests(HomeserverTestCase):
self.assertIsInstance(f.value, RequestSendFailed)
self.assertIsInstance(f.value.inner_exception, ResponseNeverReceived)
+ def test_client_ip_range_blacklist(self):
+ """Ensure that Synapse does not try to connect to blacklisted IPs"""
+
+ # Set up the ip_range blacklist
+ self.hs.config.federation_ip_range_blacklist = IPSet([
+ "127.0.0.0/8",
+ "fe80::/64",
+ ])
+ self.reactor.lookups["internal"] = "127.0.0.1"
+ self.reactor.lookups["internalv6"] = "fe80:0:0:0:0:8a2e:370:7337"
+ self.reactor.lookups["fine"] = "10.20.30.40"
+ cl = MatrixFederationHttpClient(self.hs, None)
+
+ # Try making a GET request to a blacklisted IPv4 address
+ # ------------------------------------------------------
+ # Make the request
+ d = cl.get_json("internal:8008", "foo/bar", timeout=10000)
+
+ # Nothing happened yet
+ self.assertNoResult(d)
+
+ self.pump(1)
+
+ # Check that it was unable to resolve the address
+ clients = self.reactor.tcpClients
+ self.assertEqual(len(clients), 0)
+
+ f = self.failureResultOf(d)
+ self.assertIsInstance(f.value, RequestSendFailed)
+ self.assertIsInstance(f.value.inner_exception, DNSLookupError)
+
+ # Try making a POST request to a blacklisted IPv6 address
+ # -------------------------------------------------------
+ # Make the request
+ d = cl.post_json("internalv6:8008", "foo/bar", timeout=10000)
+
+ # Nothing has happened yet
+ self.assertNoResult(d)
+
+ # Move the reactor forwards
+ self.pump(1)
+
+ # Check that it was unable to resolve the address
+ clients = self.reactor.tcpClients
+ self.assertEqual(len(clients), 0)
+
+ # Check that it was due to a blacklisted DNS lookup
+ f = self.failureResultOf(d, RequestSendFailed)
+ self.assertIsInstance(f.value.inner_exception, DNSLookupError)
+
+ # Try making a GET request to a non-blacklisted IPv4 address
+ # ----------------------------------------------------------
+ # Make the request
+ d = cl.post_json("fine:8008", "foo/bar", timeout=10000)
+
+ # Nothing has happened yet
+ self.assertNoResult(d)
+
+ # Move the reactor forwards
+ self.pump(1)
+
+ # Check that it was able to resolve the address
+ clients = self.reactor.tcpClients
+ self.assertNotEqual(len(clients), 0)
+
+ # Connection will still fail as this IP address does not resolve to anything
+ f = self.failureResultOf(d, RequestSendFailed)
+ self.assertIsInstance(f.value.inner_exception, ConnectingCancelledError)
+
def test_client_gets_headers(self):
"""
Once the client gets the headers, _request returns successfully.
"""
request = MatrixFederationRequest(
- method="GET",
- destination="testserv:8008",
- path="foo/bar",
+ method="GET", destination="testserv:8008", path="foo/bar"
)
d = self.cl._send_request(request, timeout=10000)
@@ -258,8 +325,10 @@ class FederationClientTests(HomeserverTestCase):
# Send it the HTTP response
client.dataReceived(
- (b"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n"
- b"Server: Fake\r\n\r\n")
+ (
+ b"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n"
+ b"Server: Fake\r\n\r\n"
+ )
)
# Push by enough to time it out
@@ -274,9 +343,7 @@ class FederationClientTests(HomeserverTestCase):
requiring a trailing slash. We need to retry the request with a
trailing slash. Workaround for Synapse <= v0.99.3, explained in #3622.
"""
- d = self.cl.get_json(
- "testserv:8008", "foo/bar", try_trailing_slash_on_400=True,
- )
+ d = self.cl.get_json("testserv:8008", "foo/bar", try_trailing_slash_on_400=True)
# Send the request
self.pump()
@@ -329,9 +396,7 @@ class FederationClientTests(HomeserverTestCase):
See test_client_requires_trailing_slashes() for context.
"""
- d = self.cl.get_json(
- "testserv:8008", "foo/bar", try_trailing_slash_on_400=True,
- )
+ d = self.cl.get_json("testserv:8008", "foo/bar", try_trailing_slash_on_400=True)
# Send the request
self.pump()
@@ -368,10 +433,7 @@ class FederationClientTests(HomeserverTestCase):
self.failureResultOf(d)
def test_client_sends_body(self):
- self.cl.post_json(
- "testserv:8008", "foo/bar", timeout=10000,
- data={"a": "b"}
- )
+ self.cl.post_json("testserv:8008", "foo/bar", timeout=10000, data={"a": "b"})
self.pump()
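
The new `test_client_ip_range_blacklist` above sets `federation_ip_range_blacklist` to a `netaddr.IPSet` and expects lookups that resolve into those ranges to fail with a `DNSLookupError`. A minimal sketch of the membership check such a blacklist implies (only `netaddr` is assumed; the helper name is made up for illustration):

```python
# Minimal illustration of an IPSet-based blacklist check; is_blacklisted is a
# hypothetical helper, not a Synapse function.
from netaddr import IPAddress, IPSet

federation_ip_range_blacklist = IPSet(["127.0.0.0/8", "fe80::/64"])

def is_blacklisted(ip: str) -> bool:
    return IPAddress(ip) in federation_ip_range_blacklist

assert is_blacklisted("127.0.0.1")
assert is_blacklisted("fe80:0:0:0:0:8a2e:370:7337")
assert not is_blacklisted("10.20.30.40")
```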
diff --git a/tests/patch_inline_callbacks.py b/tests/patch_inline_callbacks.py
index 0f613945c8..ee0add3455 100644
--- a/tests/patch_inline_callbacks.py
+++ b/tests/patch_inline_callbacks.py
@@ -45,7 +45,9 @@ def do_patch():
except Exception:
if LoggingContext.current_context() != start_context:
err = "%s changed context from %s to %s on exception" % (
- f, start_context, LoggingContext.current_context()
+ f,
+ start_context,
+ LoggingContext.current_context(),
)
print(err, file=sys.stderr)
raise Exception(err)
@@ -54,7 +56,9 @@ def do_patch():
if not isinstance(res, Deferred) or res.called:
if LoggingContext.current_context() != start_context:
err = "%s changed context from %s to %s" % (
- f, start_context, LoggingContext.current_context()
+ f,
+ start_context,
+ LoggingContext.current_context(),
)
# print the error to stderr because otherwise all we
# see in travis-ci is the 500 error
@@ -66,9 +70,7 @@ def do_patch():
err = (
"%s returned incomplete deferred in non-sentinel context "
"%s (start was %s)"
- ) % (
- f, LoggingContext.current_context(), start_context,
- )
+ ) % (f, LoggingContext.current_context(), start_context)
print(err, file=sys.stderr)
raise Exception(err)
@@ -76,7 +78,9 @@ def do_patch():
if LoggingContext.current_context() != start_context:
err = "%s completion of %s changed context from %s to %s" % (
"Failure" if isinstance(r, Failure) else "Success",
- f, start_context, LoggingContext.current_context(),
+ f,
+ start_context,
+ LoggingContext.current_context(),
)
print(err, file=sys.stderr)
raise Exception(err)
diff --git a/tests/push/test_email.py b/tests/push/test_email.py
index 325ea449ae..9cdde1a9bd 100644
--- a/tests/push/test_email.py
+++ b/tests/push/test_email.py
@@ -52,22 +52,26 @@ class EmailPusherTests(HomeserverTestCase):
return d
config = self.default_config()
- config.email_enable_notifs = True
- config.start_pushers = True
-
- config.email_template_dir = os.path.abspath(
- pkg_resources.resource_filename('synapse', 'res/templates')
- )
- config.email_notif_template_html = "notif_mail.html"
- config.email_notif_template_text = "notif_mail.txt"
- config.email_smtp_host = "127.0.0.1"
- config.email_smtp_port = 20
- config.require_transport_security = False
- config.email_smtp_user = None
- config.email_smtp_pass = None
- config.email_app_name = "Matrix"
- config.email_notif_from = "test@example.com"
- config.email_riot_base_url = None
+ config["email"] = {
+ "enable_notifs": True,
+ "template_dir": os.path.abspath(
+ pkg_resources.resource_filename('synapse', 'res/templates')
+ ),
+ "expiry_template_html": "notice_expiry.html",
+ "expiry_template_text": "notice_expiry.txt",
+ "notif_template_html": "notif_mail.html",
+ "notif_template_text": "notif_mail.txt",
+ "smtp_host": "127.0.0.1",
+ "smtp_port": 20,
+ "require_transport_security": False,
+ "smtp_user": None,
+ "smtp_pass": None,
+ "app_name": "Matrix",
+ "notif_from": "test@example.com",
+ "riot_base_url": None,
+ }
+ config["public_baseurl"] = "aaa"
+ config["start_pushers"] = True
hs = self.setup_test_homeserver(config=config, sendmail=sendmail)
diff --git a/tests/push/test_http.py b/tests/push/test_http.py
index 13bd2c8688..aba618b2be 100644
--- a/tests/push/test_http.py
+++ b/tests/push/test_http.py
@@ -54,7 +54,7 @@ class HTTPPusherTests(HomeserverTestCase):
m.post_json_get_json = post_json_get_json
config = self.default_config()
- config.start_pushers = True
+ config["start_pushers"] = True
hs = self.setup_test_homeserver(config=config, simple_http_client=m)
diff --git a/tests/replication/slave/storage/_base.py b/tests/replication/slave/storage/_base.py
index 1f72a2a04f..104349cdbd 100644
--- a/tests/replication/slave/storage/_base.py
+++ b/tests/replication/slave/storage/_base.py
@@ -74,21 +74,18 @@ class BaseSlavedStoreTestCase(unittest.HomeserverTestCase):
self.assertEqual(
master_result,
expected_result,
- "Expected master result to be %r but was %r" % (
- expected_result, master_result
- ),
+ "Expected master result to be %r but was %r"
+ % (expected_result, master_result),
)
self.assertEqual(
slaved_result,
expected_result,
- "Expected slave result to be %r but was %r" % (
- expected_result, slaved_result
- ),
+ "Expected slave result to be %r but was %r"
+ % (expected_result, slaved_result),
)
self.assertEqual(
master_result,
slaved_result,
- "Slave result %r does not match master result %r" % (
- slaved_result, master_result
- ),
+ "Slave result %r does not match master result %r"
+ % (slaved_result, master_result),
)
diff --git a/tests/replication/slave/storage/test_events.py b/tests/replication/slave/storage/test_events.py
index 65ecff3bd6..a368117b43 100644
--- a/tests/replication/slave/storage/test_events.py
+++ b/tests/replication/slave/storage/test_events.py
@@ -234,10 +234,7 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
type="m.room.member", sender=USER_ID_2, key=USER_ID_2, membership="join"
)
msg, msgctx = self.build_event()
- self.get_success(self.master_store.persist_events([
- (j2, j2ctx),
- (msg, msgctx),
- ]))
+ self.get_success(self.master_store.persist_events([(j2, j2ctx), (msg, msgctx)]))
self.replicate()
event_source = RoomEventSource(self.hs)
@@ -257,15 +254,13 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
#
# First, we get a list of the rooms we are joined to
joined_rooms = self.get_success(
- self.slaved_store.get_rooms_for_user_with_stream_ordering(
- USER_ID_2,
- ),
+ self.slaved_store.get_rooms_for_user_with_stream_ordering(USER_ID_2)
)
# Then, we get a list of the events since the last sync
membership_changes = self.get_success(
self.slaved_store.get_membership_changes_for_user(
- USER_ID_2, prev_token, current_token,
+ USER_ID_2, prev_token, current_token
)
)
@@ -298,9 +293,7 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
self.master_store.persist_events([(event, context)], backfilled=True)
)
else:
- self.get_success(
- self.master_store.persist_event(event, context)
- )
+ self.get_success(self.master_store.persist_event(event, context))
return event
@@ -359,9 +352,7 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
)
else:
state_handler = self.hs.get_state_handler()
- context = self.get_success(state_handler.compute_event_context(
- event
- ))
+ context = self.get_success(state_handler.compute_event_context(event))
self.master_store.add_push_actions_to_staging(
event.event_id, {user_id: actions for user_id, actions in push_actions}
diff --git a/tests/replication/tcp/streams/_base.py b/tests/replication/tcp/streams/_base.py
index 38b368a972..ce3835ae6a 100644
--- a/tests/replication/tcp/streams/_base.py
+++ b/tests/replication/tcp/streams/_base.py
@@ -22,6 +22,7 @@ from tests.server import FakeTransport
class BaseStreamTestCase(unittest.HomeserverTestCase):
"""Base class for tests of the replication streams"""
+
def prepare(self, reactor, clock, hs):
# build a replication server
server_factory = ReplicationStreamProtocolFactory(self.hs)
@@ -52,6 +53,7 @@ class BaseStreamTestCase(unittest.HomeserverTestCase):
class TestReplicationClientHandler(object):
"""Drop-in for ReplicationClientHandler which just collects RDATA rows"""
+
def __init__(self):
self.received_rdata_rows = []
@@ -69,6 +71,4 @@ class TestReplicationClientHandler(object):
def on_rdata(self, stream_name, token, rows):
for r in rows:
- self.received_rdata_rows.append(
- (stream_name, token, r)
- )
+ self.received_rdata_rows.append((stream_name, token, r))
diff --git a/tests/rest/admin/test_admin.py b/tests/rest/admin/test_admin.py
index da19a83918..ee5f09041f 100644
--- a/tests/rest/admin/test_admin.py
+++ b/tests/rest/admin/test_admin.py
@@ -41,10 +41,10 @@ class VersionTestCase(unittest.HomeserverTestCase):
request, channel = self.make_request("GET", self.url, shorthand=False)
self.render(request)
- self.assertEqual(200, int(channel.result["code"]),
- msg=channel.result["body"])
- self.assertEqual({'server_version', 'python_version'},
- set(channel.json_body.keys()))
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual(
+ {'server_version', 'python_version'}, set(channel.json_body.keys())
+ )
class UserRegisterTestCase(unittest.HomeserverTestCase):
@@ -200,9 +200,7 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
- want_mac.update(
- nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin"
- )
+ want_mac.update(nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin")
want_mac = want_mac.hexdigest()
body = json.dumps(
@@ -330,11 +328,13 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
#
# Invalid user_type
- body = json.dumps({
- "nonce": nonce(),
- "username": "a",
- "password": "1234",
- "user_type": "invalid"}
+ body = json.dumps(
+ {
+ "nonce": nonce(),
+ "username": "a",
+ "password": "1234",
+ "user_type": "invalid",
+ }
)
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
self.render(request)
@@ -357,9 +357,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
hs.config.user_consent_version = "1"
consent_uri_builder = Mock()
- consent_uri_builder.build_user_consent_uri.return_value = (
- "http://example.com"
- )
+ consent_uri_builder.build_user_consent_uri.return_value = "http://example.com"
self.event_creation_handler._consent_uri_builder = consent_uri_builder
self.store = hs.get_datastore()
@@ -371,9 +369,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
self.other_user_token = self.login("user", "pass")
# Mark the admin user as having consented
- self.get_success(
- self.store.user_set_consent_version(self.admin_user, "1"),
- )
+ self.get_success(self.store.user_set_consent_version(self.admin_user, "1"))
def test_shutdown_room_consent(self):
"""Test that we can shutdown rooms with local users who have not
@@ -385,9 +381,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
room_id = self.helper.create_room_as(self.other_user, tok=self.other_user_token)
# Assert one user in room
- users_in_room = self.get_success(
- self.store.get_users_in_room(room_id),
- )
+ users_in_room = self.get_success(self.store.get_users_in_room(room_id))
self.assertEqual([self.other_user], users_in_room)
# Enable require consent to send events
@@ -395,8 +389,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
# Assert that the user is getting consent error
self.helper.send(
- room_id,
- body="foo", tok=self.other_user_token, expect_code=403,
+ room_id, body="foo", tok=self.other_user_token, expect_code=403
)
# Test that the admin can still send shutdown
@@ -412,9 +405,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
# Assert there is now no longer anyone in the room
- users_in_room = self.get_success(
- self.store.get_users_in_room(room_id),
- )
+ users_in_room = self.get_success(self.store.get_users_in_room(room_id))
self.assertEqual([], users_in_room)
@unittest.DEBUG
@@ -459,24 +450,20 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
url = "rooms/%s/initialSync" % (room_id,)
request, channel = self.make_request(
- "GET",
- url.encode('ascii'),
- access_token=self.admin_user_tok,
+ "GET", url.encode('ascii'), access_token=self.admin_user_tok
)
self.render(request)
self.assertEqual(
- expect_code, int(channel.result["code"]), msg=channel.result["body"],
+ expect_code, int(channel.result["code"]), msg=channel.result["body"]
)
url = "events?timeout=0&room_id=" + room_id
request, channel = self.make_request(
- "GET",
- url.encode('ascii'),
- access_token=self.admin_user_tok,
+ "GET", url.encode('ascii'), access_token=self.admin_user_tok
)
self.render(request)
self.assertEqual(
- expect_code, int(channel.result["code"]), msg=channel.result["body"],
+ expect_code, int(channel.result["code"]), msg=channel.result["body"]
)
@@ -502,15 +489,11 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):
"POST",
"/create_group".encode('ascii'),
access_token=self.admin_user_tok,
- content={
- "localpart": "test",
- }
+ content={"localpart": "test"},
)
self.render(request)
- self.assertEqual(
- 200, int(channel.result["code"]), msg=channel.result["body"],
- )
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
group_id = channel.json_body["group_id"]
@@ -520,27 +503,17 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):
url = "/groups/%s/admin/users/invite/%s" % (group_id, self.other_user)
request, channel = self.make_request(
- "PUT",
- url.encode('ascii'),
- access_token=self.admin_user_tok,
- content={}
+ "PUT", url.encode('ascii'), access_token=self.admin_user_tok, content={}
)
self.render(request)
- self.assertEqual(
- 200, int(channel.result["code"]), msg=channel.result["body"],
- )
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
url = "/groups/%s/self/accept_invite" % (group_id,)
request, channel = self.make_request(
- "PUT",
- url.encode('ascii'),
- access_token=self.other_user_token,
- content={}
+ "PUT", url.encode('ascii'), access_token=self.other_user_token, content={}
)
self.render(request)
- self.assertEqual(
- 200, int(channel.result["code"]), msg=channel.result["body"],
- )
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
# Check other user knows they're in the group
self.assertIn(group_id, self._get_groups_user_is_in(self.admin_user_tok))
@@ -552,15 +525,11 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):
"POST",
url.encode('ascii'),
access_token=self.admin_user_tok,
- content={
- "localpart": "test",
- }
+ content={"localpart": "test"},
)
self.render(request)
- self.assertEqual(
- 200, int(channel.result["code"]), msg=channel.result["body"],
- )
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
# Check group returns 404
self._check_group(group_id, expect_code=404)
@@ -576,28 +545,22 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):
url = "/groups/%s/profile" % (group_id,)
request, channel = self.make_request(
- "GET",
- url.encode('ascii'),
- access_token=self.admin_user_tok,
+ "GET", url.encode('ascii'), access_token=self.admin_user_tok
)
self.render(request)
self.assertEqual(
- expect_code, int(channel.result["code"]), msg=channel.result["body"],
+ expect_code, int(channel.result["code"]), msg=channel.result["body"]
)
def _get_groups_user_is_in(self, access_token):
"""Returns the list of groups the user is in (given their access token)
"""
request, channel = self.make_request(
- "GET",
- "/joined_groups".encode('ascii'),
- access_token=access_token,
+ "GET", "/joined_groups".encode('ascii'), access_token=access_token
)
self.render(request)
- self.assertEqual(
- 200, int(channel.result["code"]), msg=channel.result["body"],
- )
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
return channel.json_body["groups"]
diff --git a/tests/rest/client/test_consent.py b/tests/rest/client/test_consent.py
index 5528971190..88f8f1abdc 100644
--- a/tests/rest/client/test_consent.py
+++ b/tests/rest/client/test_consent.py
@@ -42,15 +42,18 @@ class ConsentResourceTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
- config.user_consent_version = "1"
- config.public_baseurl = ""
- config.form_secret = "123abc"
+ config["public_baseurl"] = "aaaa"
+ config["form_secret"] = "123abc"
# Make some temporary templates...
temp_consent_path = self.mktemp()
os.mkdir(temp_consent_path)
os.mkdir(os.path.join(temp_consent_path, 'en'))
- config.user_consent_template_dir = os.path.abspath(temp_consent_path)
+
+ config["user_consent"] = {
+ "version": "1",
+ "template_dir": os.path.abspath(temp_consent_path),
+ }
with open(os.path.join(temp_consent_path, "en/1.html"), 'w') as f:
f.write("{{version}},{{has_consented}}")
diff --git a/tests/rest/client/test_identity.py b/tests/rest/client/test_identity.py
index 2e51ffa418..68949307d9 100644
--- a/tests/rest/client/test_identity.py
+++ b/tests/rest/client/test_identity.py
@@ -32,7 +32,7 @@ class IdentityTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
- config.enable_3pid_lookup = False
+ config["enable_3pid_lookup"] = False
self.hs = self.setup_test_homeserver(config=config)
return self.hs
@@ -44,7 +44,7 @@ class IdentityTestCase(unittest.HomeserverTestCase):
tok = self.login("kermit", "monkey")
request, channel = self.make_request(
- b"POST", "/createRoom", b"{}", access_token=tok,
+ b"POST", "/createRoom", b"{}", access_token=tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
@@ -56,11 +56,9 @@ class IdentityTestCase(unittest.HomeserverTestCase):
"address": "test@example.com",
}
request_data = json.dumps(params)
- request_url = (
- "/rooms/%s/invite" % (room_id)
- ).encode('ascii')
+ request_url = ("/rooms/%s/invite" % (room_id)).encode('ascii')
request, channel = self.make_request(
- b"POST", request_url, request_data, access_token=tok,
+ b"POST", request_url, request_data, access_token=tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"403", channel.result)
diff --git a/tests/rest/client/v1/test_directory.py b/tests/rest/client/v1/test_directory.py
index f63c68e7ed..633b7dbda0 100644
--- a/tests/rest/client/v1/test_directory.py
+++ b/tests/rest/client/v1/test_directory.py
@@ -34,7 +34,7 @@ class DirectoryTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
- config.require_membership_for_aliases = True
+ config["require_membership_for_aliases"] = True
self.hs = self.setup_test_homeserver(config=config)
@@ -45,7 +45,7 @@ class DirectoryTestCase(unittest.HomeserverTestCase):
self.room_owner_tok = self.login("room_owner", "test")
self.room_id = self.helper.create_room_as(
- self.room_owner, tok=self.room_owner_tok,
+ self.room_owner, tok=self.room_owner_tok
)
self.user = self.register_user("user", "test")
@@ -80,12 +80,10 @@ class DirectoryTestCase(unittest.HomeserverTestCase):
# We deliberately use a localpart under the length threshold so
# that we can make sure that the check is done on the whole alias.
- data = {
- "room_alias_name": random_string(256 - len(self.hs.hostname)),
- }
+ data = {"room_alias_name": random_string(256 - len(self.hs.hostname))}
request_data = json.dumps(data)
request, channel = self.make_request(
- "POST", url, request_data, access_token=self.user_tok,
+ "POST", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, 400, channel.result)
@@ -96,51 +94,42 @@ class DirectoryTestCase(unittest.HomeserverTestCase):
# Check with an alias of allowed length. There should already be
# a test that ensures it works in test_register.py, but let's be
# as cautious as possible here.
- data = {
- "room_alias_name": random_string(5),
- }
+ data = {"room_alias_name": random_string(5)}
request_data = json.dumps(data)
request, channel = self.make_request(
- "POST", url, request_data, access_token=self.user_tok,
+ "POST", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, 200, channel.result)
def set_alias_via_state_event(self, expected_code, alias_length=5):
- url = ("/_matrix/client/r0/rooms/%s/state/m.room.aliases/%s"
- % (self.room_id, self.hs.hostname))
-
- data = {
- "aliases": [
- self.random_alias(alias_length),
- ],
- }
+ url = "/_matrix/client/r0/rooms/%s/state/m.room.aliases/%s" % (
+ self.room_id,
+ self.hs.hostname,
+ )
+
+ data = {"aliases": [self.random_alias(alias_length)]}
request_data = json.dumps(data)
request, channel = self.make_request(
- "PUT", url, request_data, access_token=self.user_tok,
+ "PUT", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, expected_code, channel.result)
def set_alias_via_directory(self, expected_code, alias_length=5):
url = "/_matrix/client/r0/directory/room/%s" % self.random_alias(alias_length)
- data = {
- "room_id": self.room_id,
- }
+ data = {"room_id": self.room_id}
request_data = json.dumps(data)
request, channel = self.make_request(
- "PUT", url, request_data, access_token=self.user_tok,
+ "PUT", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, expected_code, channel.result)
def random_alias(self, length):
- return RoomAlias(
- random_string(length),
- self.hs.hostname,
- ).to_string()
+ return RoomAlias(random_string(length), self.hs.hostname).to_string()
def ensure_user_left_room(self):
self.ensure_membership("leave")
@@ -151,17 +140,9 @@ class DirectoryTestCase(unittest.HomeserverTestCase):
def ensure_membership(self, membership):
try:
if membership == "leave":
- self.helper.leave(
- room=self.room_id,
- user=self.user,
- tok=self.user_tok,
- )
+ self.helper.leave(room=self.room_id, user=self.user, tok=self.user_tok)
if membership == "join":
- self.helper.join(
- room=self.room_id,
- user=self.user,
- tok=self.user_tok,
- )
+ self.helper.join(room=self.room_id, user=self.user, tok=self.user_tok)
except AssertionError:
# We don't care whether the leave request didn't return a 200 (e.g.
# if the user isn't already in the room), because we only want to
diff --git a/tests/rest/client/v1/test_events.py b/tests/rest/client/v1/test_events.py
index 8a9a55a527..f340b7e851 100644
--- a/tests/rest/client/v1/test_events.py
+++ b/tests/rest/client/v1/test_events.py
@@ -36,9 +36,9 @@ class EventStreamPermissionsTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
- config.enable_registration_captcha = False
- config.enable_registration = True
- config.auto_join_rooms = []
+ config["enable_registration_captcha"] = False
+ config["enable_registration"] = True
+ config["auto_join_rooms"] = []
hs = self.setup_test_homeserver(
config=config, ratelimiter=NonCallableMock(spec_set=["can_do_action"])
diff --git a/tests/rest/client/v1/test_login.py b/tests/rest/client/v1/test_login.py
index 9ebd91f678..0397f91a9e 100644
--- a/tests/rest/client/v1/test_login.py
+++ b/tests/rest/client/v1/test_login.py
@@ -37,10 +37,7 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
for i in range(0, 6):
params = {
"type": "m.login.password",
- "identifier": {
- "type": "m.id.user",
- "user": "kermit" + str(i),
- },
+ "identifier": {"type": "m.id.user", "user": "kermit" + str(i)},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -57,14 +54,11 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
# than 1min.
self.assertTrue(retry_after_ms < 6000)
- self.reactor.advance(retry_after_ms / 1000.)
+ self.reactor.advance(retry_after_ms / 1000.0)
params = {
"type": "m.login.password",
- "identifier": {
- "type": "m.id.user",
- "user": "kermit" + str(i),
- },
+ "identifier": {"type": "m.id.user", "user": "kermit" + str(i)},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -82,10 +76,7 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
for i in range(0, 6):
params = {
"type": "m.login.password",
- "identifier": {
- "type": "m.id.user",
- "user": "kermit",
- },
+ "identifier": {"type": "m.id.user", "user": "kermit"},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -102,14 +93,11 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
# than 1min.
self.assertTrue(retry_after_ms < 6000)
- self.reactor.advance(retry_after_ms / 1000.)
+ self.reactor.advance(retry_after_ms / 1000.0)
params = {
"type": "m.login.password",
- "identifier": {
- "type": "m.id.user",
- "user": "kermit",
- },
+ "identifier": {"type": "m.id.user", "user": "kermit"},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -127,10 +115,7 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
for i in range(0, 6):
params = {
"type": "m.login.password",
- "identifier": {
- "type": "m.id.user",
- "user": "kermit",
- },
+ "identifier": {"type": "m.id.user", "user": "kermit"},
"password": "notamonkey",
}
request_data = json.dumps(params)
@@ -147,14 +132,11 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
# than 1min.
self.assertTrue(retry_after_ms < 6000)
- self.reactor.advance(retry_after_ms / 1000.)
+ self.reactor.advance(retry_after_ms / 1000.0)
params = {
"type": "m.login.password",
- "identifier": {
- "type": "m.id.user",
- "user": "kermit",
- },
+ "identifier": {"type": "m.id.user", "user": "kermit"},
"password": "notamonkey",
}
request_data = json.dumps(params)
diff --git a/tests/rest/client/v1/test_profile.py b/tests/rest/client/v1/test_profile.py
index 7306e61b7c..769c37ce52 100644
--- a/tests/rest/client/v1/test_profile.py
+++ b/tests/rest/client/v1/test_profile.py
@@ -171,7 +171,7 @@ class ProfilesRestrictedTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
- config.require_auth_for_profile_requests = True
+ config["require_auth_for_profile_requests"] = True
self.hs = self.setup_test_homeserver(config=config)
return self.hs
@@ -199,37 +199,24 @@ class ProfilesRestrictedTestCase(unittest.HomeserverTestCase):
def test_in_shared_room(self):
self.ensure_requester_left_room()
- self.helper.join(
- room=self.room_id,
- user=self.requester,
- tok=self.requester_tok,
- )
+ self.helper.join(room=self.room_id, user=self.requester, tok=self.requester_tok)
self.try_fetch_profile(200, self.requester_tok)
def try_fetch_profile(self, expected_code, access_token=None):
- self.request_profile(
- expected_code,
- access_token=access_token
- )
+ self.request_profile(expected_code, access_token=access_token)
self.request_profile(
- expected_code,
- url_suffix="/displayname",
- access_token=access_token,
+ expected_code, url_suffix="/displayname", access_token=access_token
)
self.request_profile(
- expected_code,
- url_suffix="/avatar_url",
- access_token=access_token,
+ expected_code, url_suffix="/avatar_url", access_token=access_token
)
def request_profile(self, expected_code, url_suffix="", access_token=None):
request, channel = self.make_request(
- "GET",
- self.profile_url + url_suffix,
- access_token=access_token,
+ "GET", self.profile_url + url_suffix, access_token=access_token
)
self.render(request)
self.assertEqual(channel.code, expected_code, channel.result)
@@ -237,9 +224,7 @@ class ProfilesRestrictedTestCase(unittest.HomeserverTestCase):
def ensure_requester_left_room(self):
try:
self.helper.leave(
- room=self.room_id,
- user=self.requester,
- tok=self.requester_tok,
+ room=self.room_id, user=self.requester, tok=self.requester_tok
)
except AssertionError:
# We don't care whether the leave request didn't return a 200 (e.g.
diff --git a/tests/rest/client/v1/test_rooms.py b/tests/rest/client/v1/test_rooms.py
index 9b191436cc..6220172cde 100644
--- a/tests/rest/client/v1/test_rooms.py
+++ b/tests/rest/client/v1/test_rooms.py
@@ -919,7 +919,7 @@ class PublicRoomsRestrictedTestCase(unittest.HomeserverTestCase):
self.url = b"/_matrix/client/r0/publicRooms"
config = self.default_config()
- config.restrict_public_rooms_to_local_users = True
+ config["restrict_public_rooms_to_local_users"] = True
self.hs = self.setup_test_homeserver(config=config)
return self.hs
diff --git a/tests/rest/client/v2_alpha/test_auth.py b/tests/rest/client/v2_alpha/test_auth.py
index 0ca3c4657b..ad7d476401 100644
--- a/tests/rest/client/v2_alpha/test_auth.py
+++ b/tests/rest/client/v2_alpha/test_auth.py
@@ -36,9 +36,9 @@ class FallbackAuthTests(unittest.HomeserverTestCase):
config = self.default_config()
- config.enable_registration_captcha = True
- config.recaptcha_public_key = "brokencake"
- config.registrations_require_3pid = []
+ config["enable_registration_captcha"] = True
+ config["recaptcha_public_key"] = "brokencake"
+ config["registrations_require_3pid"] = []
hs = self.setup_test_homeserver(config=config)
return hs
diff --git a/tests/rest/client/v2_alpha/test_register.py b/tests/rest/client/v2_alpha/test_register.py
index 1c3a621d26..65685883db 100644
--- a/tests/rest/client/v2_alpha/test_register.py
+++ b/tests/rest/client/v2_alpha/test_register.py
@@ -41,11 +41,10 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
as_token = "i_am_an_app_service"
appservice = ApplicationService(
- as_token, self.hs.config.server_name,
+ as_token,
+ self.hs.config.server_name,
id="1234",
- namespaces={
- "users": [{"regex": r"@as_user.*", "exclusive": True}],
- },
+ namespaces={"users": [{"regex": r"@as_user.*", "exclusive": True}]},
)
self.hs.get_datastore().services_cache.append(appservice)
@@ -57,10 +56,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
- det_data = {
- "user_id": user_id,
- "home_server": self.hs.hostname,
- }
+ det_data = {"user_id": user_id, "home_server": self.hs.hostname}
self.assertDictContainsSubset(det_data, channel.json_body)
def test_POST_appservice_registration_invalid(self):
@@ -128,10 +124,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
self.render(request)
- det_data = {
- "home_server": self.hs.hostname,
- "device_id": "guest_device",
- }
+ det_data = {"home_server": self.hs.hostname, "device_id": "guest_device"}
self.assertEquals(channel.result["code"], b"200", channel.result)
self.assertDictContainsSubset(det_data, channel.json_body)
@@ -159,7 +152,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
else:
self.assertEquals(channel.result["code"], b"200", channel.result)
- self.reactor.advance(retry_after_ms / 1000.)
+ self.reactor.advance(retry_after_ms / 1000.0)
request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
self.render(request)
@@ -187,7 +180,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
else:
self.assertEquals(channel.result["code"], b"200", channel.result)
- self.reactor.advance(retry_after_ms / 1000.)
+ self.reactor.advance(retry_after_ms / 1000.0)
request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
self.render(request)
@@ -208,9 +201,11 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
# Test for account expiring after a week.
- config.enable_registration = True
- config.account_validity.enabled = True
- config.account_validity.period = 604800000 # Time in ms for 1 week
+ config["enable_registration"] = True
+ config["account_validity"] = {
+ "enabled": True,
+ "period": 604800000, # Time in ms for 1 week
+ }
self.hs = self.setup_test_homeserver(config=config)
return self.hs
@@ -221,23 +216,19 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):
# The specific endpoint doesn't matter, all we need is an authenticated
# endpoint.
- request, channel = self.make_request(
- b"GET", "/sync", access_token=tok,
- )
+ request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
self.reactor.advance(datetime.timedelta(weeks=1).total_seconds())
- request, channel = self.make_request(
- b"GET", "/sync", access_token=tok,
- )
+ request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"403", channel.result)
self.assertEquals(
- channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result,
+ channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result
)
def test_manual_renewal(self):
@@ -253,21 +244,17 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):
admin_tok = self.login("admin", "adminpassword")
url = "/_matrix/client/unstable/admin/account_validity/validity"
- params = {
- "user_id": user_id,
- }
+ params = {"user_id": user_id}
request_data = json.dumps(params)
request, channel = self.make_request(
- b"POST", url, request_data, access_token=admin_tok,
+ b"POST", url, request_data, access_token=admin_tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
# The specific endpoint doesn't matter, all we need is an authenticated
# endpoint.
- request, channel = self.make_request(
- b"GET", "/sync", access_token=tok,
- )
+ request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
@@ -286,20 +273,18 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):
}
request_data = json.dumps(params)
request, channel = self.make_request(
- b"POST", url, request_data, access_token=admin_tok,
+ b"POST", url, request_data, access_token=admin_tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
# The specific endpoint doesn't matter, all we need is an authenticated
# endpoint.
- request, channel = self.make_request(
- b"GET", "/sync", access_token=tok,
- )
+ request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"403", channel.result)
self.assertEquals(
- channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result,
+ channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result
)
@@ -316,14 +301,17 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
+
# Test for account expiring after a week and renewal emails being sent 2
# days before expiry.
- config.enable_registration = True
- config.account_validity.enabled = True
- config.account_validity.renew_by_email_enabled = True
- config.account_validity.period = 604800000 # Time in ms for 1 week
- config.account_validity.renew_at = 172800000 # Time in ms for 2 days
- config.account_validity.renew_email_subject = "Renew your account"
+ config["enable_registration"] = True
+ config["account_validity"] = {
+ "enabled": True,
+ "period": 604800000, # Time in ms for 1 week
+ "renew_at": 172800000, # Time in ms for 2 days
+ "renew_by_email_enabled": True,
+ "renew_email_subject": "Renew your account",
+ }
# Email config.
self.email_attempts = []
@@ -332,17 +320,23 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
self.email_attempts.append((args, kwargs))
return
- config.email_template_dir = os.path.abspath(
- pkg_resources.resource_filename('synapse', 'res/templates')
- )
- config.email_expiry_template_html = "notice_expiry.html"
- config.email_expiry_template_text = "notice_expiry.txt"
- config.email_smtp_host = "127.0.0.1"
- config.email_smtp_port = 20
- config.require_transport_security = False
- config.email_smtp_user = None
- config.email_smtp_pass = None
- config.email_notif_from = "test@example.com"
+ config["email"] = {
+ "enable_notifs": True,
+ "template_dir": os.path.abspath(
+ pkg_resources.resource_filename('synapse', 'res/templates')
+ ),
+ "expiry_template_html": "notice_expiry.html",
+ "expiry_template_text": "notice_expiry.txt",
+ "notif_template_html": "notif_mail.html",
+ "notif_template_text": "notif_mail.txt",
+ "smtp_host": "127.0.0.1",
+ "smtp_port": 20,
+ "require_transport_security": False,
+ "smtp_user": None,
+ "smtp_pass": None,
+ "notif_from": "test@example.com",
+ }
+ config["public_baseurl"] = "aaa"
self.hs = self.setup_test_homeserver(config=config, sendmail=sendmail)
@@ -358,10 +352,15 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
# We need to manually add an email address otherwise the handler will do
# nothing.
now = self.hs.clock.time_msec()
- self.get_success(self.store.user_add_threepid(
- user_id=user_id, medium="email", address="kermit@example.com",
- validated_at=now, added_at=now,
- ))
+ self.get_success(
+ self.store.user_add_threepid(
+ user_id=user_id,
+ medium="email",
+ address="kermit@example.com",
+ validated_at=now,
+ added_at=now,
+ )
+ )
# Move 6 days forward. This should trigger a renewal email to be sent.
self.reactor.advance(datetime.timedelta(days=6).total_seconds())
@@ -379,9 +378,7 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
# our access token should be denied from now, otherwise they should
# succeed.
self.reactor.advance(datetime.timedelta(days=3).total_seconds())
- request, channel = self.make_request(
- b"GET", "/sync", access_token=tok,
- )
+ request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
@@ -393,13 +390,19 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
# We need to manually add an email address otherwise the handler will do
# nothing.
now = self.hs.clock.time_msec()
- self.get_success(self.store.user_add_threepid(
- user_id=user_id, medium="email", address="kermit@example.com",
- validated_at=now, added_at=now,
- ))
+ self.get_success(
+ self.store.user_add_threepid(
+ user_id=user_id,
+ medium="email",
+ address="kermit@example.com",
+ validated_at=now,
+ added_at=now,
+ )
+ )
request, channel = self.make_request(
- b"POST", "/_matrix/client/unstable/account_validity/send_mail",
+ b"POST",
+ "/_matrix/client/unstable/account_validity/send_mail",
access_token=tok,
)
self.render(request)
diff --git a/tests/rest/media/v1/test_base.py b/tests/rest/media/v1/test_base.py
index af8f74eb42..00688a7325 100644
--- a/tests/rest/media/v1/test_base.py
+++ b/tests/rest/media/v1/test_base.py
@@ -26,20 +26,14 @@ class GetFileNameFromHeadersTests(unittest.TestCase):
b'inline; filename="aze%20rty"': u"aze%20rty",
b'inline; filename="aze\"rty"': u'aze"rty',
b'inline; filename="azer;ty"': u"azer;ty",
-
b"inline; filename*=utf-8''foo%C2%A3bar": u"foo£bar",
}
def tests(self):
for hdr, expected in self.TEST_CASES.items():
- res = get_filename_from_headers(
- {
- b'Content-Disposition': [hdr],
- },
- )
+ res = get_filename_from_headers({b'Content-Disposition': [hdr]})
self.assertEqual(
- res, expected,
- "expected output for %s to be %s but was %s" % (
- hdr, expected, res,
- )
+ res,
+ expected,
+ "expected output for %s to be %s but was %s" % (hdr, expected, res),
)
diff --git a/tests/rest/media/v1/test_media_storage.py b/tests/rest/media/v1/test_media_storage.py
index ad5e9a612f..1069a44145 100644
--- a/tests/rest/media/v1/test_media_storage.py
+++ b/tests/rest/media/v1/test_media_storage.py
@@ -25,13 +25,11 @@ from six.moves.urllib import parse
from twisted.internet import defer, reactor
from twisted.internet.defer import Deferred
-from synapse.config.repository import MediaStorageProviderConfig
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.filepath import MediaFilePaths
from synapse.rest.media.v1.media_storage import MediaStorage
from synapse.rest.media.v1.storage_provider import FileStorageProviderBackend
from synapse.util.logcontext import make_deferred_yieldable
-from synapse.util.module_loader import load_module
from tests import unittest
@@ -120,12 +118,14 @@ class MediaRepoTests(unittest.HomeserverTestCase):
client.get_file = get_file
self.storage_path = self.mktemp()
+ self.media_store_path = self.mktemp()
os.mkdir(self.storage_path)
+ os.mkdir(self.media_store_path)
config = self.default_config()
- config.media_store_path = self.storage_path
- config.thumbnail_requirements = {}
- config.max_image_pixels = 2000000
+ config["media_store_path"] = self.media_store_path
+ config["thumbnail_requirements"] = {}
+ config["max_image_pixels"] = 2000000
provider_config = {
"module": "synapse.rest.media.v1.storage_provider.FileStorageProviderBackend",
@@ -134,12 +134,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
"store_remote": True,
"config": {"directory": self.storage_path},
}
-
- loaded = list(load_module(provider_config)) + [
- MediaStorageProviderConfig(False, False, False)
- ]
-
- config.media_storage_providers = [loaded]
+ config["media_storage_providers"] = [provider_config]
hs = self.setup_test_homeserver(config=config, http_client=client)
diff --git a/tests/rest/media/v1/test_url_preview.py b/tests/rest/media/v1/test_url_preview.py
index 650ce95a6f..1ab0f7293a 100644
--- a/tests/rest/media/v1/test_url_preview.py
+++ b/tests/rest/media/v1/test_url_preview.py
@@ -16,7 +16,6 @@
import os
import attr
-from netaddr import IPSet
from twisted.internet._resolver import HostResolution
from twisted.internet.address import IPv4Address, IPv6Address
@@ -25,9 +24,6 @@ from twisted.python.failure import Failure
from twisted.test.proto_helpers import AccumulatingProtocol
from twisted.web._newclient import ResponseDone
-from synapse.config.repository import MediaStorageProviderConfig
-from synapse.util.module_loader import load_module
-
from tests import unittest
from tests.server import FakeTransport
@@ -67,23 +63,23 @@ class URLPreviewTests(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
- self.storage_path = self.mktemp()
- os.mkdir(self.storage_path)
-
config = self.default_config()
- config.url_preview_enabled = True
- config.max_spider_size = 9999999
- config.url_preview_ip_range_blacklist = IPSet(
- (
- "192.168.1.1",
- "1.0.0.0/8",
- "3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff",
- "2001:800::/21",
- )
+ config["url_preview_enabled"] = True
+ config["max_spider_size"] = 9999999
+ config["url_preview_ip_range_blacklist"] = (
+ "192.168.1.1",
+ "1.0.0.0/8",
+ "3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff",
+ "2001:800::/21",
)
- config.url_preview_ip_range_whitelist = IPSet(("1.1.1.1",))
- config.url_preview_url_blacklist = []
- config.media_store_path = self.storage_path
+ config["url_preview_ip_range_whitelist"] = ("1.1.1.1",)
+ config["url_preview_url_blacklist"] = []
+
+ self.storage_path = self.mktemp()
+ self.media_store_path = self.mktemp()
+ os.mkdir(self.storage_path)
+ os.mkdir(self.media_store_path)
+ config["media_store_path"] = self.media_store_path
provider_config = {
"module": "synapse.rest.media.v1.storage_provider.FileStorageProviderBackend",
@@ -93,11 +89,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
"config": {"directory": self.storage_path},
}
- loaded = list(load_module(provider_config)) + [
- MediaStorageProviderConfig(False, False, False)
- ]
-
- config.media_storage_providers = [loaded]
+ config["media_storage_providers"] = [provider_config]
hs = self.setup_test_homeserver(config=config)
@@ -297,12 +289,12 @@ class URLPreviewTests(unittest.HomeserverTestCase):
# No requests made.
self.assertEqual(len(self.reactor.tcpClients), 0)
- self.assertEqual(channel.code, 403)
+ self.assertEqual(channel.code, 502)
self.assertEqual(
channel.json_body,
{
'errcode': 'M_UNKNOWN',
- 'error': 'IP address blocked by IP blacklist entry',
+ 'error': 'DNS resolution failure during URL preview generation',
},
)
@@ -318,12 +310,12 @@ class URLPreviewTests(unittest.HomeserverTestCase):
request.render(self.preview_url)
self.pump()
- self.assertEqual(channel.code, 403)
+ self.assertEqual(channel.code, 502)
self.assertEqual(
channel.json_body,
{
'errcode': 'M_UNKNOWN',
- 'error': 'IP address blocked by IP blacklist entry',
+ 'error': 'DNS resolution failure during URL preview generation',
},
)
@@ -339,7 +331,6 @@ class URLPreviewTests(unittest.HomeserverTestCase):
# No requests made.
self.assertEqual(len(self.reactor.tcpClients), 0)
- self.assertEqual(channel.code, 403)
self.assertEqual(
channel.json_body,
{
@@ -347,6 +338,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
'error': 'IP address blocked by IP blacklist entry',
},
)
+ self.assertEqual(channel.code, 403)
def test_blacklisted_ip_range_direct(self):
"""
@@ -414,12 +406,12 @@ class URLPreviewTests(unittest.HomeserverTestCase):
)
request.render(self.preview_url)
self.pump()
- self.assertEqual(channel.code, 403)
+ self.assertEqual(channel.code, 502)
self.assertEqual(
channel.json_body,
{
'errcode': 'M_UNKNOWN',
- 'error': 'IP address blocked by IP blacklist entry',
+ 'error': 'DNS resolution failure during URL preview generation',
},
)
@@ -439,12 +431,12 @@ class URLPreviewTests(unittest.HomeserverTestCase):
# No requests made.
self.assertEqual(len(self.reactor.tcpClients), 0)
- self.assertEqual(channel.code, 403)
+ self.assertEqual(channel.code, 502)
self.assertEqual(
channel.json_body,
{
'errcode': 'M_UNKNOWN',
- 'error': 'IP address blocked by IP blacklist entry',
+ 'error': 'DNS resolution failure during URL preview generation',
},
)
@@ -460,11 +452,11 @@ class URLPreviewTests(unittest.HomeserverTestCase):
request.render(self.preview_url)
self.pump()
- self.assertEqual(channel.code, 403)
+ self.assertEqual(channel.code, 502)
self.assertEqual(
channel.json_body,
{
'errcode': 'M_UNKNOWN',
- 'error': 'IP address blocked by IP blacklist entry',
+ 'error': 'DNS resolution failure during URL preview generation',
},
)
diff --git a/tests/rest/test_well_known.py b/tests/rest/test_well_known.py
index 8d8f03e005..b090bb974c 100644
--- a/tests/rest/test_well_known.py
+++ b/tests/rest/test_well_known.py
@@ -31,27 +31,24 @@ class WellKnownTests(unittest.HomeserverTestCase):
self.hs.config.default_identity_server = "https://testis"
request, channel = self.make_request(
- "GET",
- "/.well-known/matrix/client",
- shorthand=False,
+ "GET", "/.well-known/matrix/client", shorthand=False
)
self.render(request)
self.assertEqual(request.code, 200)
self.assertEqual(
- channel.json_body, {
+ channel.json_body,
+ {
"m.homeserver": {"base_url": "https://tesths"},
"m.identity_server": {"base_url": "https://testis"},
- }
+ },
)
def test_well_known_no_public_baseurl(self):
self.hs.config.public_baseurl = None
request, channel = self.make_request(
- "GET",
- "/.well-known/matrix/client",
- shorthand=False,
+ "GET", "/.well-known/matrix/client", shorthand=False
)
self.render(request)
diff --git a/tests/server.py b/tests/server.py
index 8f89f4a83d..c15a47f2a4 100644
--- a/tests/server.py
+++ b/tests/server.py
@@ -182,7 +182,8 @@ def make_request(
if federation_auth_origin is not None:
req.requestHeaders.addRawHeader(
- b"Authorization", b"X-Matrix origin=%s,key=,sig=" % (federation_auth_origin,)
+ b"Authorization",
+ b"X-Matrix origin=%s,key=,sig=" % (federation_auth_origin,),
)
if content:
@@ -226,6 +227,8 @@ class ThreadedMemoryReactorClock(MemoryReactorClock):
"""
def __init__(self):
+ self.threadpool = ThreadPool(self)
+
self._udp = []
lookups = self.lookups = {}
@@ -233,7 +236,7 @@ class ThreadedMemoryReactorClock(MemoryReactorClock):
class FakeResolver(object):
def getHostByName(self, name, timeout=None):
if name not in lookups:
- return fail(DNSLookupError("OH NO: unknown %s" % (name, )))
+ return fail(DNSLookupError("OH NO: unknown %s" % (name,)))
return succeed(lookups[name])
self.nameResolver = SimpleResolverComplexifier(FakeResolver())
@@ -254,6 +257,37 @@ class ThreadedMemoryReactorClock(MemoryReactorClock):
self.callLater(0, d.callback, True)
return d
+ def getThreadPool(self):
+ return self.threadpool
+
+
+class ThreadPool:
+ """
+ Threadless thread pool.
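+
+ Work handed to callInThreadWithCallback is scheduled on the fake reactor
+ with callLater(0) instead of running on a real thread, keeping test
+ execution deterministic.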
+ """
+
+ def __init__(self, reactor):
+ self._reactor = reactor
+
+ def start(self):
+ pass
+
+ def stop(self):
+ pass
+
+ def callInThreadWithCallback(self, onResult, function, *args, **kwargs):
+ def _(res):
+ if isinstance(res, Failure):
+ onResult(False, res)
+ else:
+ onResult(True, res)
+
+ d = Deferred()
+ d.addCallback(lambda x: function(*args, **kwargs))
+ d.addBoth(_)
+ self._reactor.callLater(0, d.callback, True)
+ return d
+
def setup_test_homeserver(cleanup_func, *args, **kwargs):
"""
@@ -289,36 +323,10 @@ def setup_test_homeserver(cleanup_func, *args, **kwargs):
**kwargs
)
- class ThreadPool:
- """
- Threadless thread pool.
- """
-
- def start(self):
- pass
-
- def stop(self):
- pass
-
- def callInThreadWithCallback(self, onResult, function, *args, **kwargs):
- def _(res):
- if isinstance(res, Failure):
- onResult(False, res)
- else:
- onResult(True, res)
-
- d = Deferred()
- d.addCallback(lambda x: function(*args, **kwargs))
- d.addBoth(_)
- clock._reactor.callLater(0, d.callback, True)
- return d
-
- clock.threadpool = ThreadPool()
-
if pool:
pool.runWithConnection = runWithConnection
pool.runInteraction = runInteraction
- pool.threadpool = ThreadPool()
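+ # Use the module-level fake ThreadPool so database callbacks also run on
+ # the fake reactor rather than in real threads.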
+ pool.threadpool = ThreadPool(clock._reactor)
pool.running = True
return d
@@ -454,6 +462,6 @@ class FakeTransport(object):
logger.warning("Exception writing to protocol: %s", e)
return
- self.buffer = self.buffer[len(to_write):]
+ self.buffer = self.buffer[len(to_write) :]
if self.buffer and self.autoflush:
self._reactor.callLater(0.0, self.flush)
diff --git a/tests/server_notices/test_consent.py b/tests/server_notices/test_consent.py
index e0b4e0eb63..872039c8f1 100644
--- a/tests/server_notices/test_consent.py
+++ b/tests/server_notices/test_consent.py
@@ -12,6 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+
+import os
+
import synapse.rest.admin
from synapse.rest.client.v1 import login, room
from synapse.rest.client.v2_alpha import sync
@@ -30,20 +33,27 @@ class ConsentNoticesTests(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
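+ # The user_consent section below needs an on-disk template_dir, so
+ # create an empty temporary directory for it.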
+ tmpdir = self.mktemp()
+ os.mkdir(tmpdir)
self.consent_notice_message = "consent %(consent_uri)s"
config = self.default_config()
- config.user_consent_version = "1"
- config.user_consent_server_notice_content = {
- "msgtype": "m.text",
- "body": self.consent_notice_message,
+ config["user_consent"] = {
+ "version": "1",
+ "template_dir": tmpdir,
+ "server_notice_content": {
+ "msgtype": "m.text",
+ "body": self.consent_notice_message,
+ },
+ }
+ config["public_baseurl"] = "https://example.com/"
+ config["form_secret"] = "123abc"
+
+ config["server_notices"] = {
+ "system_mxid_localpart": "notices",
+ "system_mxid_display_name": "test display name",
+ "system_mxid_avatar_url": None,
+ "room_name": "Server Notices",
}
- config.public_baseurl = "https://example.com/"
- config.form_secret = "123abc"
-
- config.server_notices_mxid = "@notices:test"
- config.server_notices_mxid_display_name = "test display name"
- config.server_notices_mxid_avatar_url = None
- config.server_notices_room_name = "Server Notices"
hs = self.setup_test_homeserver(config=config)
diff --git a/tests/server_notices/test_resource_limits_server_notices.py b/tests/server_notices/test_resource_limits_server_notices.py
index be73e718c2..739ee59ce4 100644
--- a/tests/server_notices/test_resource_limits_server_notices.py
+++ b/tests/server_notices/test_resource_limits_server_notices.py
@@ -27,10 +27,14 @@ from tests import unittest
class TestResourceLimitsServerNotices(unittest.HomeserverTestCase):
-
def make_homeserver(self, reactor, clock):
hs_config = self.default_config("test")
- hs_config.server_notices_mxid = "@server:test"
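+ # default_config now returns a plain dict, so server notices are enabled
+ # via the nested config section rather than individual attributes.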
+ hs_config["server_notices"] = {
+ "system_mxid_localpart": "server",
+ "system_mxid_display_name": "test display name",
+ "system_mxid_avatar_url": None,
+ "room_name": "Server Notices",
+ }
hs = self.setup_test_homeserver(config=hs_config, expire_access_token=True)
return hs
@@ -80,7 +84,7 @@ class TestResourceLimitsServerNotices(unittest.HomeserverTestCase):
self._send_notice.assert_not_called()
# Test when mau limiting disabled
self.hs.config.hs_disabled = False
- self.hs.limit_usage_by_mau = False
+ self.hs.config.limit_usage_by_mau = False
self.get_success(self._rlsn.maybe_send_server_notice_to_user(self.user_id))
self._send_notice.assert_not_called()
diff --git a/tests/state/test_v2.py b/tests/state/test_v2.py
index f448b01326..9c5311d916 100644
--- a/tests/state/test_v2.py
+++ b/tests/state/test_v2.py
@@ -50,6 +50,7 @@ class FakeEvent(object):
refer to events. The event_id has node_id as localpart and example.com
as domain.
"""
+
def __init__(self, id, sender, type, state_key, content):
self.node_id = id
self.event_id = EventID(id, "example.com").to_string()
@@ -142,24 +143,14 @@ INITIAL_EVENTS = [
content=MEMBERSHIP_CONTENT_JOIN,
),
FakeEvent(
- id="START",
- sender=ZARA,
- type=EventTypes.Message,
- state_key=None,
- content={},
+ id="START", sender=ZARA, type=EventTypes.Message, state_key=None, content={}
),
FakeEvent(
- id="END",
- sender=ZARA,
- type=EventTypes.Message,
- state_key=None,
- content={},
+ id="END", sender=ZARA, type=EventTypes.Message, state_key=None, content={}
),
]
-INITIAL_EDGES = [
- "START", "IMZ", "IMC", "IMB", "IJR", "IPOWER", "IMA", "CREATE",
-]
+INITIAL_EDGES = ["START", "IMZ", "IMC", "IMB", "IJR", "IPOWER", "IMA", "CREATE"]
class StateTestCase(unittest.TestCase):
@@ -170,12 +161,7 @@ class StateTestCase(unittest.TestCase):
sender=ALICE,
type=EventTypes.PowerLevels,
state_key="",
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- }
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
FakeEvent(
id="MA",
@@ -196,19 +182,11 @@ class StateTestCase(unittest.TestCase):
sender=BOB,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
]
- edges = [
- ["END", "MB", "MA", "PA", "START"],
- ["END", "PB", "PA"],
- ]
+ edges = [["END", "MB", "MA", "PA", "START"], ["END", "PB", "PA"]]
expected_state_ids = ["PA", "MA", "MB"]
@@ -232,10 +210,7 @@ class StateTestCase(unittest.TestCase):
),
]
- edges = [
- ["END", "JR", "START"],
- ["END", "ME", "START"],
- ]
+ edges = [["END", "JR", "START"], ["END", "ME", "START"]]
expected_state_ids = ["JR"]
@@ -248,45 +223,25 @@ class StateTestCase(unittest.TestCase):
sender=ALICE,
type=EventTypes.PowerLevels,
state_key="",
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- }
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
FakeEvent(
id="PB",
sender=BOB,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- CHARLIE: 50,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50, CHARLIE: 50}},
),
FakeEvent(
id="PC",
sender=CHARLIE,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- CHARLIE: 0,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50, CHARLIE: 0}},
),
]
- edges = [
- ["END", "PC", "PB", "PA", "START"],
- ["END", "PA"],
- ]
+ edges = [["END", "PC", "PB", "PA", "START"], ["END", "PA"]]
expected_state_ids = ["PC"]
@@ -295,68 +250,38 @@ class StateTestCase(unittest.TestCase):
def test_topic_basic(self):
events = [
FakeEvent(
- id="T1",
- sender=ALICE,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T1", sender=ALICE, type=EventTypes.Topic, state_key="", content={}
),
FakeEvent(
id="PA1",
sender=ALICE,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
FakeEvent(
- id="T2",
- sender=ALICE,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T2", sender=ALICE, type=EventTypes.Topic, state_key="", content={}
),
FakeEvent(
id="PA2",
sender=ALICE,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 0,
- },
- },
+ content={"users": {ALICE: 100, BOB: 0}},
),
FakeEvent(
id="PB",
sender=BOB,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
FakeEvent(
- id="T3",
- sender=BOB,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T3", sender=BOB, type=EventTypes.Topic, state_key="", content={}
),
]
- edges = [
- ["END", "PA2", "T2", "PA1", "T1", "START"],
- ["END", "T3", "PB", "PA1"],
- ]
+ edges = [["END", "PA2", "T2", "PA1", "T1", "START"], ["END", "T3", "PB", "PA1"]]
expected_state_ids = ["PA2", "T2"]
@@ -365,30 +290,17 @@ class StateTestCase(unittest.TestCase):
def test_topic_reset(self):
events = [
FakeEvent(
- id="T1",
- sender=ALICE,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T1", sender=ALICE, type=EventTypes.Topic, state_key="", content={}
),
FakeEvent(
id="PA",
sender=ALICE,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
FakeEvent(
- id="T2",
- sender=BOB,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T2", sender=BOB, type=EventTypes.Topic, state_key="", content={}
),
FakeEvent(
id="MB",
@@ -399,10 +311,7 @@ class StateTestCase(unittest.TestCase):
),
]
- edges = [
- ["END", "MB", "T2", "PA", "T1", "START"],
- ["END", "T1"],
- ]
+ edges = [["END", "MB", "T2", "PA", "T1", "START"], ["END", "T1"]]
expected_state_ids = ["T1", "MB", "PA"]
@@ -411,61 +320,34 @@ class StateTestCase(unittest.TestCase):
def test_topic(self):
events = [
FakeEvent(
- id="T1",
- sender=ALICE,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T1", sender=ALICE, type=EventTypes.Topic, state_key="", content={}
),
FakeEvent(
id="PA1",
sender=ALICE,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
FakeEvent(
- id="T2",
- sender=ALICE,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T2", sender=ALICE, type=EventTypes.Topic, state_key="", content={}
),
FakeEvent(
id="PA2",
sender=ALICE,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 0,
- },
- },
+ content={"users": {ALICE: 100, BOB: 0}},
),
FakeEvent(
id="PB",
sender=BOB,
type=EventTypes.PowerLevels,
state_key='',
- content={
- "users": {
- ALICE: 100,
- BOB: 50,
- },
- },
+ content={"users": {ALICE: 100, BOB: 50}},
),
FakeEvent(
- id="T3",
- sender=BOB,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T3", sender=BOB, type=EventTypes.Topic, state_key="", content={}
),
FakeEvent(
id="MZ1",
@@ -475,11 +357,7 @@ class StateTestCase(unittest.TestCase):
content={},
),
FakeEvent(
- id="T4",
- sender=ALICE,
- type=EventTypes.Topic,
- state_key="",
- content={},
+ id="T4", sender=ALICE, type=EventTypes.Topic, state_key="", content={}
),
]
@@ -587,13 +465,7 @@ class StateTestCase(unittest.TestCase):
class LexicographicalTestCase(unittest.TestCase):
def test_simple(self):
- graph = {
- "l": {"o"},
- "m": {"n", "o"},
- "n": {"o"},
- "o": set(),
- "p": {"o"},
- }
+ graph = {"l": {"o"}, "m": {"n", "o"}, "n": {"o"}, "o": set(), "p": {"o"}}
res = list(lexicographical_topological_sort(graph, key=lambda x: x))
@@ -680,7 +552,13 @@ class SimpleParamStateTestCase(unittest.TestCase):
self.expected_combined_state = {
(e.type, e.state_key): e.event_id
- for e in [create_event, alice_member, join_rules, bob_member, charlie_member]
+ for e in [
+ create_event,
+ alice_member,
+ join_rules,
+ bob_member,
+ charlie_member,
+ ]
}
def test_event_map_none(self):
@@ -720,11 +598,7 @@ class TestStateResolutionStore(object):
Deferred[dict[str, FrozenEvent]]: Dict from event_id to event.
"""
- return {
- eid: self.event_map[eid]
- for eid in event_ids
- if eid in self.event_map
- }
+ return {eid: self.event_map[eid] for eid in event_ids if eid in self.event_map}
def get_auth_chain(self, event_ids):
"""Gets the full auth chain for a set of events (including rejected
diff --git a/tests/storage/test_background_update.py b/tests/storage/test_background_update.py
index 5568a607c7..fbb9302694 100644
--- a/tests/storage/test_background_update.py
+++ b/tests/storage/test_background_update.py
@@ -9,9 +9,7 @@ from tests.utils import setup_test_homeserver
class BackgroundUpdateTestCase(unittest.TestCase):
@defer.inlineCallbacks
def setUp(self):
- hs = yield setup_test_homeserver(
- self.addCleanup
- )
+ hs = yield setup_test_homeserver(self.addCleanup)
self.store = hs.get_datastore()
self.clock = hs.get_clock()
diff --git a/tests/storage/test_base.py b/tests/storage/test_base.py
index f18db8c384..c778de1f0c 100644
--- a/tests/storage/test_base.py
+++ b/tests/storage/test_base.py
@@ -56,10 +56,7 @@ class SQLBaseStoreTestCase(unittest.TestCase):
fake_engine = Mock(wraps=engine)
fake_engine.can_native_upsert = False
hs = TestHomeServer(
- "test",
- db_pool=self.db_pool,
- config=config,
- database_engine=fake_engine,
+ "test", db_pool=self.db_pool, config=config, database_engine=fake_engine
)
self.datastore = SQLBaseStore(None, hs)
diff --git a/tests/storage/test_end_to_end_keys.py b/tests/storage/test_end_to_end_keys.py
index 11fb8c0c19..cd2bcd4ca3 100644
--- a/tests/storage/test_end_to_end_keys.py
+++ b/tests/storage/test_end_to_end_keys.py
@@ -20,7 +20,6 @@ import tests.utils
class EndToEndKeyStoreTestCase(tests.unittest.TestCase):
-
@defer.inlineCallbacks
def setUp(self):
hs = yield tests.utils.setup_test_homeserver(self.addCleanup)
diff --git a/tests/storage/test_monthly_active_users.py b/tests/storage/test_monthly_active_users.py
index d6569a82bb..f458c03054 100644
--- a/tests/storage/test_monthly_active_users.py
+++ b/tests/storage/test_monthly_active_users.py
@@ -56,8 +56,7 @@ class MonthlyActiveUsersTestCase(unittest.HomeserverTestCase):
self.store.register(user_id=user1, token="123", password_hash=None)
self.store.register(user_id=user2, token="456", password_hash=None)
self.store.register(
- user_id=user3, token="789",
- password_hash=None, user_type=UserTypes.SUPPORT
+ user_id=user3, token="789", password_hash=None, user_type=UserTypes.SUPPORT
)
self.pump()
@@ -173,9 +172,7 @@ class MonthlyActiveUsersTestCase(unittest.HomeserverTestCase):
def test_populate_monthly_users_should_update(self):
self.store.upsert_monthly_active_user = Mock()
- self.store.is_trial_user = Mock(
- return_value=defer.succeed(False)
- )
+ self.store.is_trial_user = Mock(return_value=defer.succeed(False))
self.store.user_last_seen_monthly_active = Mock(
return_value=defer.succeed(None)
@@ -187,13 +184,9 @@ class MonthlyActiveUsersTestCase(unittest.HomeserverTestCase):
def test_populate_monthly_users_should_not_update(self):
self.store.upsert_monthly_active_user = Mock()
- self.store.is_trial_user = Mock(
- return_value=defer.succeed(False)
- )
+ self.store.is_trial_user = Mock(return_value=defer.succeed(False))
self.store.user_last_seen_monthly_active = Mock(
- return_value=defer.succeed(
- self.hs.get_clock().time_msec()
- )
+ return_value=defer.succeed(self.hs.get_clock().time_msec())
)
self.store.populate_monthly_active_users('user_id')
self.pump()
@@ -243,7 +236,7 @@ class MonthlyActiveUsersTestCase(unittest.HomeserverTestCase):
user_id=support_user_id,
token="123",
password_hash=None,
- user_type=UserTypes.SUPPORT
+ user_type=UserTypes.SUPPORT,
)
self.store.upsert_monthly_active_user(support_user_id)
diff --git a/tests/storage/test_redaction.py b/tests/storage/test_redaction.py
index 0fc5019e9f..4823d44dec 100644
--- a/tests/storage/test_redaction.py
+++ b/tests/storage/test_redaction.py
@@ -60,7 +60,7 @@ class RedactionTestCase(unittest.TestCase):
"state_key": user.to_string(),
"room_id": room.to_string(),
"content": content,
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
@@ -83,7 +83,7 @@ class RedactionTestCase(unittest.TestCase):
"state_key": user.to_string(),
"room_id": room.to_string(),
"content": {"body": body, "msgtype": u"message"},
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
@@ -105,7 +105,7 @@ class RedactionTestCase(unittest.TestCase):
"room_id": room.to_string(),
"content": {"reason": reason},
"redacts": event_id,
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
diff --git a/tests/storage/test_registration.py b/tests/storage/test_registration.py
index cb3cc4d2e5..c0e0155bb4 100644
--- a/tests/storage/test_registration.py
+++ b/tests/storage/test_registration.py
@@ -116,7 +116,7 @@ class RegistrationStoreTestCase(unittest.TestCase):
user_id=SUPPORT_USER,
token="456",
password_hash=None,
- user_type=UserTypes.SUPPORT
+ user_type=UserTypes.SUPPORT,
)
res = yield self.store.is_support_user(SUPPORT_USER)
self.assertTrue(res)
diff --git a/tests/storage/test_roommember.py b/tests/storage/test_roommember.py
index 063387863e..73ed943f5a 100644
--- a/tests/storage/test_roommember.py
+++ b/tests/storage/test_roommember.py
@@ -58,7 +58,7 @@ class RoomMemberStoreTestCase(unittest.TestCase):
"state_key": user.to_string(),
"room_id": room.to_string(),
"content": {"membership": membership},
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
diff --git a/tests/storage/test_state.py b/tests/storage/test_state.py
index 78e260a7fa..b6169436de 100644
--- a/tests/storage/test_state.py
+++ b/tests/storage/test_state.py
@@ -29,7 +29,6 @@ logger = logging.getLogger(__name__)
class StateStoreTestCase(tests.unittest.TestCase):
-
@defer.inlineCallbacks
def setUp(self):
hs = yield tests.utils.setup_test_homeserver(self.addCleanup)
@@ -57,7 +56,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
"state_key": state_key,
"room_id": room.to_string(),
"content": content,
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
@@ -83,15 +82,14 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.room, self.u_alice, EventTypes.Name, '', {"name": "test room"}
)
- state_group_map = yield self.store.get_state_groups_ids(self.room, [e2.event_id])
+ state_group_map = yield self.store.get_state_groups_ids(
+ self.room, [e2.event_id]
+ )
self.assertEqual(len(state_group_map), 1)
state_map = list(state_group_map.values())[0]
self.assertDictEqual(
state_map,
- {
- (EventTypes.Create, ''): e1.event_id,
- (EventTypes.Name, ''): e2.event_id,
- },
+ {(EventTypes.Create, ''): e1.event_id, (EventTypes.Name, ''): e2.event_id},
)
@defer.inlineCallbacks
@@ -103,15 +101,11 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.room, self.u_alice, EventTypes.Name, '', {"name": "test room"}
)
- state_group_map = yield self.store.get_state_groups(
- self.room, [e2.event_id])
+ state_group_map = yield self.store.get_state_groups(self.room, [e2.event_id])
self.assertEqual(len(state_group_map), 1)
state_list = list(state_group_map.values())[0]
- self.assertEqual(
- {ev.event_id for ev in state_list},
- {e1.event_id, e2.event_id},
- )
+ self.assertEqual({ev.event_id for ev in state_list}, {e1.event_id, e2.event_id})
@defer.inlineCallbacks
def test_get_state_for_event(self):
@@ -147,9 +141,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
)
# check we get the full state as of the final event
- state = yield self.store.get_state_for_event(
- e5.event_id,
- )
+ state = yield self.store.get_state_for_event(e5.event_id)
self.assertIsNotNone(e4)
@@ -194,7 +186,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
state_filter=StateFilter(
types={EventTypes.Member: {self.u_alice.to_string()}},
include_others=True,
- )
+ ),
)
self.assertStateMapEqual(
@@ -208,9 +200,9 @@ class StateStoreTestCase(tests.unittest.TestCase):
# check that we can grab everything except members
state = yield self.store.get_state_for_event(
- e5.event_id, state_filter=StateFilter(
- types={EventTypes.Member: set()},
- include_others=True,
+ e5.event_id,
+ state_filter=StateFilter(
+ types={EventTypes.Member: set()}, include_others=True
),
)
@@ -229,10 +221,10 @@ class StateStoreTestCase(tests.unittest.TestCase):
# test _get_state_for_group_using_cache correctly filters out members
# with types=[]
(state_dict, is_all) = yield self.store._get_state_for_group_using_cache(
- self.store._state_group_cache, group,
+ self.store._state_group_cache,
+ group,
state_filter=StateFilter(
- types={EventTypes.Member: set()},
- include_others=True,
+ types={EventTypes.Member: set()}, include_others=True
),
)
@@ -249,8 +241,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: set()},
- include_others=True,
+ types={EventTypes.Member: set()}, include_others=True
),
)
@@ -263,8 +254,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: None},
- include_others=True,
+ types={EventTypes.Member: None}, include_others=True
),
)
@@ -281,8 +271,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: None},
- include_others=True,
+ types={EventTypes.Member: None}, include_others=True
),
)
@@ -302,8 +291,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: {e5.state_key}},
- include_others=True,
+ types={EventTypes.Member: {e5.state_key}}, include_others=True
),
)
@@ -320,8 +308,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: {e5.state_key}},
- include_others=True,
+ types={EventTypes.Member: {e5.state_key}}, include_others=True
),
)
@@ -334,8 +321,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: {e5.state_key}},
- include_others=False,
+ types={EventTypes.Member: {e5.state_key}}, include_others=False
),
)
@@ -384,10 +370,10 @@ class StateStoreTestCase(tests.unittest.TestCase):
# with types=[]
room_id = self.room.to_string()
(state_dict, is_all) = yield self.store._get_state_for_group_using_cache(
- self.store._state_group_cache, group,
+ self.store._state_group_cache,
+ group,
state_filter=StateFilter(
- types={EventTypes.Member: set()},
- include_others=True,
+ types={EventTypes.Member: set()}, include_others=True
),
)
@@ -399,8 +385,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: set()},
- include_others=True,
+ types={EventTypes.Member: set()}, include_others=True
),
)
@@ -413,8 +398,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: None},
- include_others=True,
+ types={EventTypes.Member: None}, include_others=True
),
)
@@ -425,8 +409,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: None},
- include_others=True,
+ types={EventTypes.Member: None}, include_others=True
),
)
@@ -445,8 +428,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: {e5.state_key}},
- include_others=True,
+ types={EventTypes.Member: {e5.state_key}}, include_others=True
),
)
@@ -457,8 +439,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: {e5.state_key}},
- include_others=True,
+ types={EventTypes.Member: {e5.state_key}}, include_others=True
),
)
@@ -471,8 +452,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: {e5.state_key}},
- include_others=False,
+ types={EventTypes.Member: {e5.state_key}}, include_others=False
),
)
@@ -483,8 +463,7 @@ class StateStoreTestCase(tests.unittest.TestCase):
self.store._state_group_members_cache,
group,
state_filter=StateFilter(
- types={EventTypes.Member: {e5.state_key}},
- include_others=False,
+ types={EventTypes.Member: {e5.state_key}}, include_others=False
),
)
diff --git a/tests/storage/test_user_directory.py b/tests/storage/test_user_directory.py
index fd3361404f..d7d244ce97 100644
--- a/tests/storage/test_user_directory.py
+++ b/tests/storage/test_user_directory.py
@@ -36,9 +36,7 @@ class UserDirectoryStoreTestCase(unittest.TestCase):
yield self.store.update_profile_in_user_dir(ALICE, "alice", None)
yield self.store.update_profile_in_user_dir(BOB, "bob", None)
yield self.store.update_profile_in_user_dir(BOBBY, "bobby", None)
- yield self.store.add_users_in_public_rooms(
- "!room:id", (ALICE, BOB)
- )
+ yield self.store.add_users_in_public_rooms("!room:id", (ALICE, BOB))
@defer.inlineCallbacks
def test_search_user_dir(self):
diff --git a/tests/test_event_auth.py b/tests/test_event_auth.py
index 4c8f87e958..8b2741d277 100644
--- a/tests/test_event_auth.py
+++ b/tests/test_event_auth.py
@@ -37,7 +37,9 @@ class EventAuthTestCase(unittest.TestCase):
# creator should be able to send state
event_auth.check(
- RoomVersions.V1.identifier, _random_state_event(creator), auth_events,
+ RoomVersions.V1.identifier,
+ _random_state_event(creator),
+ auth_events,
do_sig_check=False,
)
@@ -82,7 +84,9 @@ class EventAuthTestCase(unittest.TestCase):
# king should be able to send state
event_auth.check(
- RoomVersions.V1.identifier, _random_state_event(king), auth_events,
+ RoomVersions.V1.identifier,
+ _random_state_event(king),
+ auth_events,
do_sig_check=False,
)
diff --git a/tests/test_federation.py b/tests/test_federation.py
index 1a5dc32c88..6a8339b561 100644
--- a/tests/test_federation.py
+++ b/tests/test_federation.py
@@ -1,4 +1,3 @@
-
from mock import Mock
from twisted.internet.defer import maybeDeferred, succeed
diff --git a/tests/test_mau.py b/tests/test_mau.py
index 00be1a8c21..1fbe0d51ff 100644
--- a/tests/test_mau.py
+++ b/tests/test_mau.py
@@ -33,9 +33,7 @@ class TestMauLimit(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
self.hs = self.setup_test_homeserver(
- "red",
- http_client=None,
- federation_client=Mock(),
+ "red", http_client=None, federation_client=Mock()
)
self.store = self.hs.get_datastore()
@@ -210,9 +208,7 @@ class TestMauLimit(unittest.HomeserverTestCase):
return access_token
def do_sync_for_user(self, token):
- request, channel = self.make_request(
- "GET", "/sync", access_token=token
- )
+ request, channel = self.make_request("GET", "/sync", access_token=token)
self.render(request)
if channel.code != 200:
diff --git a/tests/test_metrics.py b/tests/test_metrics.py
index 0ff6d0e283..2edbae5c6d 100644
--- a/tests/test_metrics.py
+++ b/tests/test_metrics.py
@@ -44,9 +44,7 @@ def get_sample_labels_value(sample):
class TestMauLimit(unittest.TestCase):
def test_basic(self):
gauge = InFlightGauge(
- "test1", "",
- labels=["test_label"],
- sub_metrics=["foo", "bar"],
+ "test1", "", labels=["test_label"], sub_metrics=["foo", "bar"]
)
def handle1(metrics):
@@ -59,37 +57,49 @@ class TestMauLimit(unittest.TestCase):
gauge.register(("key1",), handle1)
- self.assert_dict({
- "test1_total": {("key1",): 1},
- "test1_foo": {("key1",): 2},
- "test1_bar": {("key1",): 5},
- }, self.get_metrics_from_gauge(gauge))
+ self.assert_dict(
+ {
+ "test1_total": {("key1",): 1},
+ "test1_foo": {("key1",): 2},
+ "test1_bar": {("key1",): 5},
+ },
+ self.get_metrics_from_gauge(gauge),
+ )
gauge.unregister(("key1",), handle1)
- self.assert_dict({
- "test1_total": {("key1",): 0},
- "test1_foo": {("key1",): 0},
- "test1_bar": {("key1",): 0},
- }, self.get_metrics_from_gauge(gauge))
+ self.assert_dict(
+ {
+ "test1_total": {("key1",): 0},
+ "test1_foo": {("key1",): 0},
+ "test1_bar": {("key1",): 0},
+ },
+ self.get_metrics_from_gauge(gauge),
+ )
gauge.register(("key1",), handle1)
gauge.register(("key2",), handle2)
- self.assert_dict({
- "test1_total": {("key1",): 1, ("key2",): 1},
- "test1_foo": {("key1",): 2, ("key2",): 3},
- "test1_bar": {("key1",): 5, ("key2",): 7},
- }, self.get_metrics_from_gauge(gauge))
+ self.assert_dict(
+ {
+ "test1_total": {("key1",): 1, ("key2",): 1},
+ "test1_foo": {("key1",): 2, ("key2",): 3},
+ "test1_bar": {("key1",): 5, ("key2",): 7},
+ },
+ self.get_metrics_from_gauge(gauge),
+ )
gauge.unregister(("key2",), handle2)
gauge.register(("key1",), handle2)
- self.assert_dict({
- "test1_total": {("key1",): 2, ("key2",): 0},
- "test1_foo": {("key1",): 5, ("key2",): 0},
- "test1_bar": {("key1",): 7, ("key2",): 0},
- }, self.get_metrics_from_gauge(gauge))
+ self.assert_dict(
+ {
+ "test1_total": {("key1",): 2, ("key2",): 0},
+ "test1_foo": {("key1",): 5, ("key2",): 0},
+ "test1_bar": {("key1",): 7, ("key2",): 0},
+ },
+ self.get_metrics_from_gauge(gauge),
+ )
def get_metrics_from_gauge(self, gauge):
results = {}
diff --git a/tests/test_state.py b/tests/test_state.py
index 5bcc6aaa18..6491a7105a 100644
--- a/tests/test_state.py
+++ b/tests/test_state.py
@@ -168,7 +168,7 @@ class StateTestCase(unittest.TestCase):
"get_state_resolution_handler",
]
)
- hs.config = default_config("tesths")
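+ # default_config now returns a plain dict by default; this test wants a
+ # parsed HomeServerConfig object, hence parse=True.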
+ hs.config = default_config("tesths", True)
hs.get_datastore.return_value = self.store
hs.get_state_handler.return_value = None
hs.get_clock.return_value = MockClock()
diff --git a/tests/test_terms_auth.py b/tests/test_terms_auth.py
index 0968e86a7b..f412985d2c 100644
--- a/tests/test_terms_auth.py
+++ b/tests/test_terms_auth.py
@@ -69,10 +69,10 @@ class TermsTestCase(unittest.HomeserverTestCase):
"name": "My Cool Privacy Policy",
"url": "https://example.org/_matrix/consent?v=1.0",
},
- "version": "1.0"
- },
- },
- },
+ "version": "1.0",
+ }
+ }
+ }
}
self.assertIsInstance(channel.json_body["params"], dict)
self.assertDictContainsSubset(channel.json_body["params"], expected_params)
diff --git a/tests/test_types.py b/tests/test_types.py
index d314a7ff58..d83c36559f 100644
--- a/tests/test_types.py
+++ b/tests/test_types.py
@@ -94,8 +94,7 @@ class MapUsernameTestCase(unittest.TestCase):
def testSymbols(self):
self.assertEqual(
- map_username_to_mxid_localpart("test=$?_1234"),
- "test=3d=24=3f_1234",
+ map_username_to_mxid_localpart("test=$?_1234"), "test=3d=24=3f_1234"
)
def testLeadingUnderscore(self):
@@ -105,6 +104,5 @@ class MapUsernameTestCase(unittest.TestCase):
# this should work with either a unicode or a bytes
self.assertEqual(map_username_to_mxid_localpart(u'têst'), "t=c3=aast")
self.assertEqual(
- map_username_to_mxid_localpart(u'têst'.encode('utf-8')),
- "t=c3=aast",
+ map_username_to_mxid_localpart(u'têst'.encode('utf-8')), "t=c3=aast"
)
diff --git a/tests/test_utils/logging_setup.py b/tests/test_utils/logging_setup.py
index d0bc8e2112..fde0baee8e 100644
--- a/tests/test_utils/logging_setup.py
+++ b/tests/test_utils/logging_setup.py
@@ -22,6 +22,7 @@ from synapse.util.logcontext import LoggingContextFilter
class ToTwistedHandler(logging.Handler):
"""logging handler which sends the logs to the twisted log"""
+
tx_log = twisted.logger.Logger()
def emit(self, record):
@@ -41,7 +42,8 @@ def setup_logging():
root_logger = logging.getLogger()
log_format = (
- "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s"
+ "%(asctime)s - %(name)s - %(lineno)d - "
+ "%(levelname)s - %(request)s - %(message)s"
)
handler = ToTwistedHandler()
diff --git a/tests/test_visibility.py b/tests/test_visibility.py
index 3bdb500514..6a180ddc32 100644
--- a/tests/test_visibility.py
+++ b/tests/test_visibility.py
@@ -132,7 +132,7 @@ class FilterEventsForServerTestCase(tests.unittest.TestCase):
"state_key": "",
"room_id": TEST_ROOM_ID,
"content": content,
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
@@ -153,7 +153,7 @@ class FilterEventsForServerTestCase(tests.unittest.TestCase):
"state_key": user_id,
"room_id": TEST_ROOM_ID,
"content": content,
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
@@ -174,7 +174,7 @@ class FilterEventsForServerTestCase(tests.unittest.TestCase):
"sender": user_id,
"room_id": TEST_ROOM_ID,
"content": content,
- }
+ },
)
event, context = yield self.event_creation_handler.create_new_client_event(
diff --git a/tests/unittest.py b/tests/unittest.py
index 029a88d770..26204470b1 100644
--- a/tests/unittest.py
+++ b/tests/unittest.py
@@ -27,6 +27,7 @@ import twisted.logger
from twisted.internet.defer import Deferred
from twisted.trial import unittest
+from synapse.config.homeserver import HomeServerConfig
from synapse.http.server import JsonResource
from synapse.http.site import SynapseRequest
from synapse.server import HomeServer
@@ -84,9 +85,8 @@ class TestCase(unittest.TestCase):
# all future bets are off.
if LoggingContext.current_context() is not LoggingContext.sentinel:
self.fail(
- "Test starting with non-sentinel logging context %s" % (
- LoggingContext.current_context(),
- )
+ "Test starting with non-sentinel logging context %s"
+ % (LoggingContext.current_context(),)
)
old_level = logging.getLogger().level
@@ -246,7 +246,7 @@ class HomeserverTestCase(TestCase):
def default_config(self, name="test"):
"""
- Get a default HomeServer config object.
+ Get a default HomeServer config dict.
Args:
name (str): The homeserver name/domain.
@@ -300,7 +300,13 @@ class HomeserverTestCase(TestCase):
content = json.dumps(content).encode('utf8')
return make_request(
- self.reactor, method, path, content, access_token, request, shorthand,
+ self.reactor,
+ method,
+ path,
+ content,
+ access_token,
+ request,
+ shorthand,
federation_auth_origin,
)
@@ -330,7 +336,14 @@ class HomeserverTestCase(TestCase):
kwargs.update(self._hs_args)
if "config" not in kwargs:
config = self.default_config()
- kwargs["config"] = config
+ else:
+ config = kwargs["config"]
+
+ # Parse the config dict into a HomeServerConfig object
+ config_obj = HomeServerConfig()
+ config_obj.parse_config_dict(config)
+ kwargs["config"] = config_obj
+
hs = setup_test_homeserver(self.addCleanup, *args, **kwargs)
stor = hs.get_datastore()
diff --git a/tests/util/test_async_utils.py b/tests/util/test_async_utils.py
index 84dd71e47a..bf85d3b8ec 100644
--- a/tests/util/test_async_utils.py
+++ b/tests/util/test_async_utils.py
@@ -42,10 +42,10 @@ class TimeoutDeferredTest(TestCase):
self.assertNoResult(timing_out_d)
self.assertFalse(cancelled[0], "deferred was cancelled prematurely")
- self.clock.pump((1.0, ))
+ self.clock.pump((1.0,))
self.assertTrue(cancelled[0], "deferred was not cancelled by timeout")
- self.failureResultOf(timing_out_d, defer.TimeoutError, )
+ self.failureResultOf(timing_out_d, defer.TimeoutError)
def test_times_out_when_canceller_throws(self):
"""Test that we have successfully worked around
@@ -59,9 +59,9 @@ class TimeoutDeferredTest(TestCase):
self.assertNoResult(timing_out_d)
- self.clock.pump((1.0, ))
+ self.clock.pump((1.0,))
- self.failureResultOf(timing_out_d, defer.TimeoutError, )
+ self.failureResultOf(timing_out_d, defer.TimeoutError)
def test_logcontext_is_preserved_on_cancellation(self):
blocking_was_cancelled = [False]
@@ -80,10 +80,10 @@ class TimeoutDeferredTest(TestCase):
# the errbacks should be run in the test logcontext
def errback(res, deferred_name):
self.assertIs(
- LoggingContext.current_context(), context_one,
- "errback %s run in unexpected logcontext %s" % (
- deferred_name, LoggingContext.current_context(),
- )
+ LoggingContext.current_context(),
+ context_one,
+ "errback %s run in unexpected logcontext %s"
+ % (deferred_name, LoggingContext.current_context()),
)
return res
@@ -94,11 +94,10 @@ class TimeoutDeferredTest(TestCase):
self.assertIs(LoggingContext.current_context(), LoggingContext.sentinel)
timing_out_d.addErrback(errback, "timingout")
- self.clock.pump((1.0, ))
+ self.clock.pump((1.0,))
self.assertTrue(
- blocking_was_cancelled[0],
- "non-completing deferred was not cancelled",
+ blocking_was_cancelled[0], "non-completing deferred was not cancelled"
)
- self.failureResultOf(timing_out_d, defer.TimeoutError, )
+ self.failureResultOf(timing_out_d, defer.TimeoutError)
self.assertIs(LoggingContext.current_context(), context_one)
diff --git a/tests/utils.py b/tests/utils.py
index cb75514851..f21074ae28 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -68,7 +68,9 @@ def setupdb():
# connect to postgres to create the base database.
db_conn = db_engine.module.connect(
- user=POSTGRES_USER, host=POSTGRES_HOST, password=POSTGRES_PASSWORD,
+ user=POSTGRES_USER,
+ host=POSTGRES_HOST,
+ password=POSTGRES_PASSWORD,
dbname=POSTGRES_DBNAME_FOR_INITIAL_CREATE,
)
db_conn.autocommit = True
@@ -94,7 +96,9 @@ def setupdb():
def _cleanup():
db_conn = db_engine.module.connect(
- user=POSTGRES_USER, host=POSTGRES_HOST, password=POSTGRES_PASSWORD,
+ user=POSTGRES_USER,
+ host=POSTGRES_HOST,
+ password=POSTGRES_PASSWORD,
dbname=POSTGRES_DBNAME_FOR_INITIAL_CREATE,
)
db_conn.autocommit = True
@@ -106,7 +110,7 @@ def setupdb():
atexit.register(_cleanup)
-def default_config(name):
+def default_config(name, parse=False):
"""
Create a reasonable test config.
"""
@@ -114,79 +118,72 @@ def default_config(name):
"server_name": name,
"media_store_path": "media",
"uploads_path": "uploads",
-
# the test signing key is just an arbitrary ed25519 key to keep the config
# parser happy
"signing_key": "ed25519 a_lPym qvioDNmfExFBRPgdTU+wtFYKq4JfwFRv7sYVgWvmgJg",
+ "event_cache_size": 1,
+ "enable_registration": True,
+ "enable_registration_captcha": False,
+ "macaroon_secret_key": "not even a little secret",
+ "expire_access_token": False,
+ "trusted_third_party_id_servers": [],
+ "room_invite_state_types": [],
+ "password_providers": [],
+ "worker_replication_url": "",
+ "worker_app": None,
+ "email_enable_notifs": False,
+ "block_non_admin_invites": False,
+ "federation_domain_whitelist": None,
+ "federation_rc_reject_limit": 10,
+ "federation_rc_sleep_limit": 10,
+ "federation_rc_sleep_delay": 100,
+ "federation_rc_concurrent": 10,
+ "filter_timeline_limit": 5000,
+ "user_directory_search_all_users": False,
+ "user_consent_server_notice_content": None,
+ "block_events_without_consent_error": None,
+ "user_consent_at_registration": False,
+ "user_consent_policy_name": "Privacy Policy",
+ "media_storage_providers": [],
+ "autocreate_auto_join_rooms": True,
+ "auto_join_rooms": [],
+ "limit_usage_by_mau": False,
+ "hs_disabled": False,
+ "hs_disabled_message": "",
+ "hs_disabled_limit_type": "",
+ "max_mau_value": 50,
+ "mau_trial_days": 0,
+ "mau_stats_only": False,
+ "mau_limits_reserved_threepids": [],
+ "admin_contact": None,
+ "rc_message": {"per_second": 10000, "burst_count": 10000},
+ "rc_registration": {"per_second": 10000, "burst_count": 10000},
+ "rc_login": {
+ "address": {"per_second": 10000, "burst_count": 10000},
+ "account": {"per_second": 10000, "burst_count": 10000},
+ "failed_attempts": {"per_second": 10000, "burst_count": 10000},
+ },
+ "saml2_enabled": False,
+ "public_baseurl": None,
+ "default_identity_server": None,
+ "key_refresh_interval": 24 * 60 * 60 * 1000,
+ "old_signing_keys": {},
+ "tls_fingerprints": [],
+ "use_frozen_dicts": False,
+ # We need a sane default_room_version, otherwise attempts to create
+ # rooms will fail.
+ "default_room_version": "1",
+ # Disable user directory updates, because they get done in the
+ # background, which upsets the test runner.
+ "update_user_directory": False,
}
- config = HomeServerConfig()
- config.parse_config_dict(config_dict)
-
- # TODO: move this stuff into config_dict or get rid of it
- config.event_cache_size = 1
- config.enable_registration = True
- config.enable_registration_captcha = False
- config.macaroon_secret_key = "not even a little secret"
- config.expire_access_token = False
- config.trusted_third_party_id_servers = []
- config.room_invite_state_types = []
- config.password_providers = []
- config.worker_replication_url = ""
- config.worker_app = None
- config.email_enable_notifs = False
- config.block_non_admin_invites = False
- config.federation_domain_whitelist = None
- config.federation_rc_reject_limit = 10
- config.federation_rc_sleep_limit = 10
- config.federation_rc_sleep_delay = 100
- config.federation_rc_concurrent = 10
- config.filter_timeline_limit = 5000
- config.user_directory_search_all_users = False
- config.user_consent_server_notice_content = None
- config.block_events_without_consent_error = None
- config.user_consent_at_registration = False
- config.user_consent_policy_name = "Privacy Policy"
- config.media_storage_providers = []
- config.autocreate_auto_join_rooms = True
- config.auto_join_rooms = []
- config.limit_usage_by_mau = False
- config.hs_disabled = False
- config.hs_disabled_message = ""
- config.hs_disabled_limit_type = ""
- config.max_mau_value = 50
- config.mau_trial_days = 0
- config.mau_stats_only = False
- config.mau_limits_reserved_threepids = []
- config.admin_contact = None
- config.rc_messages_per_second = 10000
- config.rc_message_burst_count = 10000
- config.rc_registration.per_second = 10000
- config.rc_registration.burst_count = 10000
- config.rc_login_address.per_second = 10000
- config.rc_login_address.burst_count = 10000
- config.rc_login_account.per_second = 10000
- config.rc_login_account.burst_count = 10000
- config.rc_login_failed_attempts.per_second = 10000
- config.rc_login_failed_attempts.burst_count = 10000
- config.saml2_enabled = False
- config.public_baseurl = None
- config.default_identity_server = None
- config.key_refresh_interval = 24 * 60 * 60 * 1000
- config.old_signing_keys = {}
- config.tls_fingerprints = []
-
- config.use_frozen_dicts = False
-
- # we need a sane default_room_version, otherwise attempts to create rooms will
- # fail.
- config.default_room_version = "1"
-
- # disable user directory updates, because they get done in the
- # background, which upsets the test runner.
- config.update_user_directory = False
-
- return config
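+ # Callers that need a fully parsed HomeServerConfig (such as
+ # setup_test_homeserver below) pass parse=True; everything else gets the
+ # raw dict and can tweak it before parsing.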
+ if parse:
+ config = HomeServerConfig()
+ config.parse_config_dict(config_dict)
+ return config
+
+ return config_dict
class TestHomeServer(HomeServer):
@@ -220,7 +217,7 @@ def setup_test_homeserver(
from twisted.internet import reactor
if config is None:
- config = default_config(name)
+ config = default_config(name, parse=True)
config.ldap_enabled = False
|