-rw-r--r--  CHANGES.md                                        2
-rw-r--r--  changelog.d/6563.bugfix                           1
-rw-r--r--  changelog.d/6621.doc                              1
-rw-r--r--  changelog.d/6624.doc                              1
-rw-r--r--  changelog.d/6654.bugfix                           1
-rw-r--r--  changelog.d/6656.doc                              1
-rw-r--r--  changelog.d/6657.bugfix                           1
-rw-r--r--  changelog.d/6665.doc                              1
-rw-r--r--  contrib/docker/docker-compose.yml                 2
-rw-r--r--  docs/message_retention_policies.md              191
-rw-r--r--  docs/sample_config.yaml                          10
-rw-r--r--  synapse/config/server.py                         10
-rw-r--r--  synapse/handlers/groups_local.py                 16
-rw-r--r--  synapse/rest/key/v2/remote_key_resource.py       30
-rw-r--r--  synapse/storage/data_stores/main/__init__.py      4
-rw-r--r--  tests/rest/admin/test_admin.py                   41
-rw-r--r--  tests/rest/key/__init__.py                        0
-rw-r--r--  tests/rest/key/v2/__init__.py                     0
-rw-r--r--  tests/rest/key/v2/test_remote_key_resource.py   135
19 files changed, 408 insertions, 40 deletions
diff --git a/CHANGES.md b/CHANGES.md
index df94f742c0..24da66c596 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -6,7 +6,7 @@ Features
 
 - Add v2 APIs for the `send_join` and `send_leave` federation endpoints (as described in [MSC1802](https://github.com/matrix-org/matrix-doc/pull/1802)). ([\#6349](https://github.com/matrix-org/synapse/issues/6349))
 - Add a develop script to generate full SQL schemas. ([\#6394](https://github.com/matrix-org/synapse/issues/6394))
-- Add custom SAML username mapping functinality through an external provider plugin. ([\#6411](https://github.com/matrix-org/synapse/issues/6411))
+- Add custom SAML username mapping functionality through an external provider plugin. ([\#6411](https://github.com/matrix-org/synapse/issues/6411))
 - Automatically delete empty groups/communities. ([\#6453](https://github.com/matrix-org/synapse/issues/6453))
 - Add option `limit_profile_requests_to_users_who_share_rooms` to prevent requirement of a local user sharing a room with another user to query their profile information. ([\#6523](https://github.com/matrix-org/synapse/issues/6523))
 - Add an `export_signing_key` script to extract the public part of signing keys when rotating them. ([\#6546](https://github.com/matrix-org/synapse/issues/6546))
diff --git a/changelog.d/6563.bugfix b/changelog.d/6563.bugfix
new file mode 100644
index 0000000000..3325fb1dcf
--- /dev/null
+++ b/changelog.d/6563.bugfix
@@ -0,0 +1 @@
+Fix GET request on /_synapse/admin/v2/users endpoint. Contributed by Awesome Technologies Innovationslabor GmbH.
\ No newline at end of file
diff --git a/changelog.d/6621.doc b/changelog.d/6621.doc
new file mode 100644
index 0000000000..6722ccfda3
--- /dev/null
+++ b/changelog.d/6621.doc
@@ -0,0 +1 @@
+Fix a typo in the configuration example for purge jobs in the sample configuration file.
diff --git a/changelog.d/6624.doc b/changelog.d/6624.doc
new file mode 100644
index 0000000000..bc9a022db2
--- /dev/null
+++ b/changelog.d/6624.doc
@@ -0,0 +1 @@
+Add complete documentation of the message retention policies support.
diff --git a/changelog.d/6654.bugfix b/changelog.d/6654.bugfix
new file mode 100644
index 0000000000..fed35252db
--- /dev/null
+++ b/changelog.d/6654.bugfix
@@ -0,0 +1 @@
+Correctly proxy HTTP errors due to API calls to remote group servers.
diff --git a/changelog.d/6656.doc b/changelog.d/6656.doc
new file mode 100644
index 0000000000..9f32da1a88
--- /dev/null
+++ b/changelog.d/6656.doc
@@ -0,0 +1 @@
+Stop overriding the container's entire /etc folder in docker-compose.yaml; mount only the homeserver config file. Contributed by Fabian Meyer.
diff --git a/changelog.d/6657.bugfix b/changelog.d/6657.bugfix
new file mode 100644
index 0000000000..94e51a9896
--- /dev/null
+++ b/changelog.d/6657.bugfix
@@ -0,0 +1 @@
+Fix incorrect signing of responses from the key server implementation.
\ No newline at end of file
diff --git a/changelog.d/6665.doc b/changelog.d/6665.doc
new file mode 100644
index 0000000000..bc9a022db2
--- /dev/null
+++ b/changelog.d/6665.doc
@@ -0,0 +1 @@
+Add complete documentation of the message retention policies support.
diff --git a/contrib/docker/docker-compose.yml b/contrib/docker/docker-compose.yml
index 72c87054e5..2b044baf78 100644
--- a/contrib/docker/docker-compose.yml
+++ b/contrib/docker/docker-compose.yml
@@ -18,7 +18,7 @@ services:
       - SYNAPSE_CONFIG_PATH=/etc/homeserver.yaml
     volumes:
       # You may either store all the files in a local folder
-      - ./matrix-config:/etc
+      - ./matrix-config/homeserver.yaml:/etc/homeserver.yaml
       - ./files:/data
       # .. or you may split this between different storage points
       # - ./files:/data
diff --git a/docs/message_retention_policies.md b/docs/message_retention_policies.md
new file mode 100644
index 0000000000..4300809dfe
--- /dev/null
+++ b/docs/message_retention_policies.md
@@ -0,0 +1,191 @@
+# Message retention policies
+
+Synapse admins can enable support for message retention policies on
+their homeserver. Message retention policies exist at a room level,
+follow the semantics described in
+[MSC1763](https://github.com/matrix-org/matrix-doc/blob/matthew/msc1763/proposals/1763-configurable-retention-periods.md),
+and allow server and room admins to configure how long messages should
+be kept in a homeserver's database before being purged from it.
+**Please note that, as this feature isn't part of the Matrix
+specification yet, this implementation is to be considered
+experimental.**
+
+A message retention policy is mainly defined by its `max_lifetime`
+parameter, which defines how long a message can be kept around after
+it was sent to the room. If a room doesn't have a message retention
+policy, and there's no default one for a given server, then no message
+sent in that room is ever purged on that server.
+
+MSC1763 also specifies semantics for a `min_lifetime` parameter, which
+defines the amount of time after which an event _can_ be purged
+(counted from when it was sent to the room), but Synapse currently
+doesn't support it beyond registering it.
+
+Both `max_lifetime` and `min_lifetime` are optional parameters.
+
+Note that message retention policies don't apply to state events.
+
+Once an event reaches its expiry date (defined as the time it was sent
+plus the value for `max_lifetime` in the room), two things happen:
+
+* Synapse stops serving the event to clients via any endpoint.
+* The message gets picked up by the next purge job (see the "Purge jobs"
+  section) and is removed from Synapse's database.
+
+Since purge jobs don't run continuously, an event might stay in a
+server's database for longer than its room's `max_lifetime` would
+allow, though it will be hidden from clients during that time.
+
+Similarly, if a server (with support for message retention policies
+enabled) receives an event from another server that should have been
+purged according to its room's policy, the receiving server will still
+process and store that event until the next purge job picks it up,
+though it will always hide it from clients.
+
+
+## Server configuration
+
+Support for this feature can be enabled and configured in the
+`retention` section of the Synapse configuration file (see the
+[sample file](https://github.com/matrix-org/synapse/blob/v1.7.3/docs/sample_config.yaml#L332-L393)).
+
+To enable support for message retention policies, set `enabled` in this
+section to `true`.
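+
+For example, a minimal `retention` configuration that just enables the
+feature (with no default policy and no custom purge jobs) would look
+like this:
+
+```yaml
+retention:
+  enabled: true
+```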
+
+
+### Default policy
+
+A default message retention policy is a policy defined in Synapse's
+configuration that applies to every room that doesn't have a message
+retention policy configured in its state. This allows server admins to
+ensure that messages are never kept indefinitely in a server's
+database.
+
+A default policy can be defined as such, in the `retention` section of
+the configuration file:
+
+```yaml
+  default_policy:
+    min_lifetime: 1d
+    max_lifetime: 1y
+```
+
+Here, `min_lifetime` and `max_lifetime` have the same meaning and level
+of support as previously described. They can be expressed either as a
+duration (using the units `s` (seconds), `m` (minutes), `h` (hours),
+`d` (days), `w` (weeks) and `y` (years)) or as a number of milliseconds.
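+
+As an illustration, the default policy above could equivalently be
+expressed in milliseconds (assuming Synapse counts a day as 24 hours
+and a year as 365 days):
+
+```yaml
+  default_policy:
+    min_lifetime: 86400000     # 1d
+    max_lifetime: 31536000000  # 1y
+```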
+
+
+### Purge jobs
+
+Purge jobs are the jobs that Synapse runs in the background to purge
+expired events from the database. They are only run if support for
+message retention policies is enabled in the server's configuration. If
+the server admin hasn't configured any purge jobs, Synapse uses a
+default configuration, which is described in the
+[sample configuration file](https://github.com/matrix-org/synapse/blob/master/docs/sample_config.yaml#L332-L393).
+
+Some server admins might want finer control over when events are
+removed, depending on an event's room's policy. This can be done by
+setting the `purge_jobs` sub-section in the `retention` section of the
+configuration file. An example of such a configuration could be:
+
+```yaml
+  purge_jobs:
+    - longest_max_lifetime: 3d
+      interval: 12h
+    - shortest_max_lifetime: 3d
+      longest_max_lifetime: 1w
+      interval: 1d
+    - shortest_max_lifetime: 1w
+      interval: 2d
+```
+
+In this example, we define three jobs:
+
+* one that runs twice a day (every 12 hours) and purges events in rooms
+  whose policy's `max_lifetime` is lower than or equal to 3 days.
+* one that runs once a day and purges events in rooms whose policy's
+  `max_lifetime` is between 3 days and a week.
+* one that runs once every 2 days and purges events in rooms whose
+  policy's `max_lifetime` is greater than a week.
+
+Note that this example is tailored to show different configurations
+and features slightly more jobs than is probably necessary (in
+practice, a server admin would probably consider it better to replace
+the last two jobs with a single one that runs once a day and handles
+rooms whose policy's `max_lifetime` is greater than 3 days), as shown
+below.
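+
+That simplified configuration would look like this:
+
+```yaml
+  purge_jobs:
+    - longest_max_lifetime: 3d
+      interval: 12h
+    - shortest_max_lifetime: 3d
+      interval: 1d
+```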
+
+Keep in mind, when configuring these jobs, that a purge job can become
+quite heavy on the server if it targets many rooms, so prefer several
+jobs with a low interval, each targeting a limited set of rooms. Also
+make sure to include a job with no `shortest_max_lifetime` and one with
+no `longest_max_lifetime`, so that your configuration handles every
+policy.
+
+As mentioned previously, a purge job that runs e.g. once a day means
+that an expired event might stay in the database for up to a day after
+its expiry. However, Synapse hides expired events from clients as soon
+as they expire, so an event is not visible to local users between its
+expiry date and the moment it gets purged from the server's database.
+
+
+### Lifetime limits
+
+**Note: this feature is mainly useful within a closed federation or on
+servers that don't federate, because there currently is no way to
+enforce these limits in an open federation.**
+
+Server admins can restrict the values their local users are allowed to
+use for both `min_lifetime` and `max_lifetime`. These limits can be
+defined as such in the `retention` section of the configuration file:
+
+```yaml
+  allowed_lifetime_min: 1d
+  allowed_lifetime_max: 1y
+```
+
+Here, `allowed_lifetime_min` is the lowest value a local user can set
+for both `min_lifetime` and `max_lifetime`, and `allowed_lifetime_max`
+is the highest value. Both parameters are optional (e.g. setting
+`allowed_lifetime_min` but not `allowed_lifetime_max` only enforces a
+minimum and no maximum).
+
+Like other settings in this section, these parameters can be expressed
+either as a duration or as a number of milliseconds.
+
+
+## Room configuration
+
+To configure a room's message retention policy, a room's admin or
+moderator needs to send a state event in that room with the type
+`m.room.retention` and the following content:
+
+```json
+{
+    "max_lifetime": ...
+}
+```
+
+In this event's content, the `max_lifetime` parameter has the same
+meaning as previously described, and needs to be expressed in
+milliseconds. The event's content can also include a `min_lifetime`
+parameter, which has the same meaning and limited support as previously
+described.
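+
+As a sketch, such a state event could be sent with a plain HTTP request
+to the client-server API; the homeserver URL, room ID and access token
+below are placeholders, and 604800000 milliseconds corresponds to one
+week:
+
+```
+curl -X PUT \
+  -H "Authorization: Bearer <access_token>" \
+  -d '{"max_lifetime": 604800000}' \
+  "https://matrix.example.com/_matrix/client/r0/rooms/!room:example.com/state/m.room.retention"
+```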
+
+Note that, of all the servers in the room, only those with support for
+message retention policies enabled will actually remove expired events.
+This support is currently not enabled by default in Synapse.
+
+
+## Note on reclaiming disk space
+
+While purge jobs actually delete data from the database, the disk
+space used by the database might not decrease immediately on the
+database's host. However, even though the database engine won't return
+the freed disk space to the operating system, it will reuse it for new
+data.
+
+To reclaim the freed disk space and return it to the operating system,
+the server admin needs to run `VACUUM FULL;` (or `VACUUM;` for SQLite
+databases) on Synapse's database (see the related
+[PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-vacuum.html)).
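+
+For example, on a PostgreSQL host (assuming the database is named
+`synapse`), this could be done with:
+
+```
+psql synapse -c 'VACUUM FULL;'
+```
+
+Note that `VACUUM FULL` takes an exclusive lock on the tables it
+rewrites, so it is best run while Synapse is stopped.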
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml
index fad5f968b5..0a2505e7bb 100644
--- a/docs/sample_config.yaml
+++ b/docs/sample_config.yaml
@@ -387,17 +387,17 @@ retention:
   #
   # The rationale for this per-job configuration is that some rooms might have a
   # retention policy with a low 'max_lifetime', where history needs to be purged
-  # of outdated messages on a very frequent basis (e.g. every 5min), but not want
-  # that purge to be performed by a job that's iterating over every room it knows,
-  # which would be quite heavy on the server.
+  # of outdated messages on a more frequent basis than for the rest of the rooms
+  # (e.g. every 12h), but not want that purge to be performed by a job that's
+  # iterating over every room it knows, which could be heavy on the server.
   #
   #purge_jobs:
   #  - shortest_max_lifetime: 1d
   #    longest_max_lifetime: 3d
-  #    interval: 5m:
+  #    interval: 12h
   #  - shortest_max_lifetime: 3d
   #    longest_max_lifetime: 1y
-  #    interval: 24h
+  #    interval: 1d
 
 
 ## TLS ##
diff --git a/synapse/config/server.py b/synapse/config/server.py
index 38f6ff9edc..9ac112233b 100644
--- a/synapse/config/server.py
+++ b/synapse/config/server.py
@@ -948,17 +948,17 @@ class ServerConfig(Config):
           #
           # The rationale for this per-job configuration is that some rooms might have a
           # retention policy with a low 'max_lifetime', where history needs to be purged
-          # of outdated messages on a very frequent basis (e.g. every 5min), but not want
-          # that purge to be performed by a job that's iterating over every room it knows,
-          # which would be quite heavy on the server.
+          # of outdated messages on a more frequent basis than for the rest of the rooms
+          # (e.g. every 12h), but not want that purge to be performed by a job that's
+          # iterating over every room it knows, which could be heavy on the server.
           #
           #purge_jobs:
           #  - shortest_max_lifetime: 1d
           #    longest_max_lifetime: 3d
-          #    interval: 5m:
+          #    interval: 12h
           #  - shortest_max_lifetime: 3d
           #    longest_max_lifetime: 1y
-          #    interval: 24h
+          #    interval: 1d
         """
             % locals()
         )
diff --git a/synapse/handlers/groups_local.py b/synapse/handlers/groups_local.py
index 92fecbfc44..319565510f 100644
--- a/synapse/handlers/groups_local.py
+++ b/synapse/handlers/groups_local.py
@@ -130,6 +130,8 @@ class GroupsLocalHandler(object):
                 res = yield self.transport_client.get_group_summary(
                     get_domain_from_id(group_id), group_id, requester_user_id
                 )
+            except HttpResponseException as e:
+                raise e.to_synapse_error()
             except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
 
@@ -190,6 +192,8 @@ class GroupsLocalHandler(object):
                 res = yield self.transport_client.create_group(
                     get_domain_from_id(group_id), group_id, user_id, content
                 )
+            except HttpResponseException as e:
+                raise e.to_synapse_error()
             except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
 
@@ -231,6 +235,8 @@ class GroupsLocalHandler(object):
             res = yield self.transport_client.get_users_in_group(
                 get_domain_from_id(group_id), group_id, requester_user_id
             )
+        except HttpResponseException as e:
+            raise e.to_synapse_error()
         except RequestSendFailed:
             raise SynapseError(502, "Failed to contact group server")
 
@@ -271,6 +277,8 @@ class GroupsLocalHandler(object):
                 res = yield self.transport_client.join_group(
                     get_domain_from_id(group_id), group_id, user_id, content
                 )
+            except HttpResponseException as e:
+                raise e.to_synapse_error()
             except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
 
@@ -315,6 +323,8 @@ class GroupsLocalHandler(object):
                 res = yield self.transport_client.accept_group_invite(
                     get_domain_from_id(group_id), group_id, user_id, content
                 )
+            except HttpResponseException as e:
+                raise e.to_synapse_error()
             except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
 
@@ -361,6 +371,8 @@ class GroupsLocalHandler(object):
                     requester_user_id,
                     content,
                 )
+            except HttpResponseException as e:
+                raise e.to_synapse_error()
             except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
 
@@ -424,6 +436,8 @@ class GroupsLocalHandler(object):
                     user_id,
                     content,
                 )
+            except HttpResponseException as e:
+                raise e.to_synapse_error()
             except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
 
@@ -460,6 +474,8 @@ class GroupsLocalHandler(object):
                 bulk_result = yield self.transport_client.bulk_get_publicised_groups(
                     get_domain_from_id(user_id), [user_id]
                 )
+            except HttpResponseException as e:
+                raise e.to_synapse_error()
             except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
 
diff --git a/synapse/rest/key/v2/remote_key_resource.py b/synapse/rest/key/v2/remote_key_resource.py
index bf5e0eb844..e7fc3f0431 100644
--- a/synapse/rest/key/v2/remote_key_resource.py
+++ b/synapse/rest/key/v2/remote_key_resource.py
@@ -15,7 +15,6 @@
 import logging
 
 from canonicaljson import encode_canonical_json, json
-from signedjson.key import encode_verify_key_base64
 from signedjson.sign import sign_json
 
 from twisted.internet import defer
@@ -217,28 +216,15 @@ class RemoteKey(DirectServeResource):
         if cache_misses and query_remote_on_cache_miss:
             yield self.fetcher.get_keys(cache_misses)
             yield self.query_keys(request, query, query_remote_on_cache_miss=False)
-            return
-
-        signed_keys = []
-        for key_json in json_results:
-            key_json = json.loads(key_json)
-
-            # backwards-compatibility hack for #6596: if the requested key belongs
-            # to us, make sure that all of the signing keys appear in the
-            # "verify_keys" section.
-            if key_json["server_name"] == self.config.server_name:
-                verify_keys = key_json["verify_keys"]
+        else:
+            signed_keys = []
+            for key_json in json_results:
+                key_json = json.loads(key_json)
                 for signing_key in self.config.key_server_signing_keys:
-                    key_id = "%s:%s" % (signing_key.alg, signing_key.version)
-                    verify_keys[key_id] = {
-                        "key": encode_verify_key_base64(signing_key.verify_key)
-                    }
-
-            for signing_key in self.config.key_server_signing_keys:
-                key_json = sign_json(key_json, self.config.server_name, signing_key)
+                    key_json = sign_json(key_json, self.config.server_name, signing_key)
 
-            signed_keys.append(key_json)
+                signed_keys.append(key_json)
 
-        results = {"server_keys": signed_keys}
+            results = {"server_keys": signed_keys}
 
-        respond_with_json_bytes(request, 200, encode_canonical_json(results))
+            respond_with_json_bytes(request, 200, encode_canonical_json(results))
diff --git a/synapse/storage/data_stores/main/__init__.py b/synapse/storage/data_stores/main/__init__.py
index c577c0df5f..2700cca822 100644
--- a/synapse/storage/data_stores/main/__init__.py
+++ b/synapse/storage/data_stores/main/__init__.py
@@ -526,9 +526,9 @@ class DataStore(
 
         attr_filter = {}
         if not guests:
-            attr_filter["is_guest"] = False
+            attr_filter["is_guest"] = 0
         if not deactivated:
-            attr_filter["deactivated"] = False
+            attr_filter["deactivated"] = 0
 
         return self.db.simple_select_list_paginate(
             desc="get_users_paginate",
diff --git a/tests/rest/admin/test_admin.py b/tests/rest/admin/test_admin.py
index 0ed2594381..325bd6a608 100644
--- a/tests/rest/admin/test_admin.py
+++ b/tests/rest/admin/test_admin.py
@@ -341,6 +341,47 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
         self.assertEqual("Invalid user type", channel.json_body["error"])
 
 
+class UsersListTestCase(unittest.HomeserverTestCase):
+
+    servlets = [
+        synapse.rest.admin.register_servlets,
+        login.register_servlets,
+    ]
+    url = "/_synapse/admin/v2/users"
+
+    def prepare(self, reactor, clock, hs):
+        self.admin_user = self.register_user("admin", "pass", admin=True)
+        self.admin_user_tok = self.login("admin", "pass")
+
+        self.register_user("user1", "pass1", admin=False)
+        self.register_user("user2", "pass2", admin=False)
+
+    def test_no_auth(self):
+        """
+        Try to list users without authentication.
+        """
+        request, channel = self.make_request("GET", self.url, b"{}")
+        self.render(request)
+
+        self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual("M_MISSING_TOKEN", channel.json_body["errcode"])
+
+    def test_all_users(self):
+        """
+        List all users, including deactivated users.
+        """
+        request, channel = self.make_request(
+            "GET",
+            self.url + "?deactivated=true",
+            b"{}",
+            access_token=self.admin_user_tok,
+        )
+        self.render(request)
+
+        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(3, len(channel.json_body["users"]))
+
+
 class ShutdownRoomTestCase(unittest.HomeserverTestCase):
     servlets = [
         synapse.rest.admin.register_servlets_for_client_rest_resource,
diff --git a/tests/rest/key/__init__.py b/tests/rest/key/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
--- /dev/null
+++ b/tests/rest/key/__init__.py
diff --git a/tests/rest/key/v2/__init__.py b/tests/rest/key/v2/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
--- /dev/null
+++ b/tests/rest/key/v2/__init__.py
diff --git a/tests/rest/key/v2/test_remote_key_resource.py b/tests/rest/key/v2/test_remote_key_resource.py
index d8246b4e78..6776a56cad 100644
--- a/tests/rest/key/v2/test_remote_key_resource.py
+++ b/tests/rest/key/v2/test_remote_key_resource.py
@@ -13,25 +13,30 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import urllib.parse
-from io import BytesIO
+from io import BytesIO, StringIO
 
 from mock import Mock
 
 import signedjson.key
+from canonicaljson import encode_canonical_json
 from nacl.signing import SigningKey
 from signedjson.sign import sign_json
 
 from twisted.web.resource import NoResource
 
+from synapse.crypto.keyring import PerspectivesKeyFetcher
 from synapse.http.site import SynapseRequest
 from synapse.rest.key.v2 import KeyApiV2Resource
+from synapse.storage.keys import FetchKeyResult
 from synapse.util.httpresourcetree import create_resource_tree
+from synapse.util.stringutils import random_string
 
 from tests import unittest
 from tests.server import FakeChannel, wait_until_result
+from tests.utils import default_config
 
 
-class RemoteKeyResourceTestCase(unittest.HomeserverTestCase):
+class BaseRemoteKeyResourceTestCase(unittest.HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
         self.http_client = Mock()
         return self.setup_test_homeserver(http_client=self.http_client)
@@ -73,6 +78,8 @@ class RemoteKeyResourceTestCase(unittest.HomeserverTestCase):
 
         self.http_client.get_json.side_effect = get_json
 
+
+class RemoteKeyResourceTestCase(BaseRemoteKeyResourceTestCase):
     def make_notary_request(self, server_name: str, key_id: str) -> dict:
         """Send a GET request to the test server requesting the given key.
 
@@ -125,6 +132,126 @@ class RemoteKeyResourceTestCase(unittest.HomeserverTestCase):
         oursigs = sigs[self.hs.hostname]
         self.assertEqual(len(oursigs), 2)
 
-        # and both keys should be present in the verify_keys section
+        # the requested key should be present in the verify_keys section
         self.assertIn("ed25519:ver1", keys[0]["verify_keys"])
-        self.assertIn("ed25519:a_lPym", keys[0]["verify_keys"])
+
+
+class EndToEndPerspectivesTests(BaseRemoteKeyResourceTestCase):
+    """End-to-end tests of the perspectives fetch case
+
+    The idea here is to actually wire up a PerspectivesKeyFetcher to the notary
+    endpoint, to check that the two implementations are compatible.
+    """
+
+    def default_config(self, *args, **kwargs):
+        config = super().default_config(*args, **kwargs)
+
+        # replace the signing key with our own
+        self.hs_signing_key = signedjson.key.generate_signing_key("kssk")
+        strm = StringIO()
+        signedjson.key.write_signing_keys(strm, [self.hs_signing_key])
+        config["signing_key"] = strm.getvalue()
+
+        return config
+
+    def prepare(self, reactor, clock, homeserver):
+        # make a second homeserver, configured to use the first one as a key notary
+        self.http_client2 = Mock()
+        config = default_config(name="keyclient")
+        config["trusted_key_servers"] = [
+            {
+                "server_name": self.hs.hostname,
+                "verify_keys": {
+                    "ed25519:%s"
+                    % (
+                        self.hs_signing_key.version,
+                    ): signedjson.key.encode_verify_key_base64(
+                        self.hs_signing_key.verify_key
+                    )
+                },
+            }
+        ]
+        self.hs2 = self.setup_test_homeserver(
+            http_client=self.http_client2, config=config
+        )
+
+        # wire up outbound POST /key/v2/query requests from hs2 so that they
+        # will be forwarded to hs1
+        def post_json(destination, path, data):
+            self.assertEqual(destination, self.hs.hostname)
+            self.assertEqual(
+                path, "/_matrix/key/v2/query",
+            )
+
+            channel = FakeChannel(self.site, self.reactor)
+            req = SynapseRequest(channel)
+            req.content = BytesIO(encode_canonical_json(data))
+
+            req.requestReceived(
+                b"POST", path.encode("utf-8"), b"1.1",
+            )
+            wait_until_result(self.reactor, req)
+            self.assertEqual(channel.code, 200)
+            resp = channel.json_body
+            return resp
+
+        self.http_client2.post_json.side_effect = post_json
+
+    def test_get_key(self):
+        """Fetch a key belonging to a random server"""
+        # make up a key to be fetched.
+        testkey = signedjson.key.generate_signing_key("abc")
+
+        # we expect hs1 to make a regular key request to the target server
+        self.expect_outgoing_key_request("targetserver", testkey)
+        keyid = "ed25519:%s" % (testkey.version,)
+
+        fetcher = PerspectivesKeyFetcher(self.hs2)
+        d = fetcher.get_keys({"targetserver": {keyid: 1000}})
+        res = self.get_success(d)
+        self.assertIn("targetserver", res)
+        keyres = res["targetserver"][keyid]
+        assert isinstance(keyres, FetchKeyResult)
+        self.assertEqual(
+            signedjson.key.encode_verify_key_base64(keyres.verify_key),
+            signedjson.key.encode_verify_key_base64(testkey.verify_key),
+        )
+
+    def test_get_notary_key(self):
+        """Fetch a key belonging to the notary server"""
+        # make up a key to be fetched. We randomise the keyid to try to get it to
+        # appear before the key server signing key sometimes (otherwise we bail out
+        # before fetching its signature)
+        testkey = signedjson.key.generate_signing_key(random_string(5))
+
+        # we expect hs1 to make a regular key request to itself
+        self.expect_outgoing_key_request(self.hs.hostname, testkey)
+        keyid = "ed25519:%s" % (testkey.version,)
+
+        fetcher = PerspectivesKeyFetcher(self.hs2)
+        d = fetcher.get_keys({self.hs.hostname: {keyid: 1000}})
+        res = self.get_success(d)
+        self.assertIn(self.hs.hostname, res)
+        keyres = res[self.hs.hostname][keyid]
+        assert isinstance(keyres, FetchKeyResult)
+        self.assertEqual(
+            signedjson.key.encode_verify_key_base64(keyres.verify_key),
+            signedjson.key.encode_verify_key_base64(testkey.verify_key),
+        )
+
+    def test_get_notary_keyserver_key(self):
+        """Fetch the notary's keyserver key"""
+        # we expect hs1 to make a regular key request to itself
+        self.expect_outgoing_key_request(self.hs.hostname, self.hs_signing_key)
+        keyid = "ed25519:%s" % (self.hs_signing_key.version,)
+
+        fetcher = PerspectivesKeyFetcher(self.hs2)
+        d = fetcher.get_keys({self.hs.hostname: {keyid: 1000}})
+        res = self.get_success(d)
+        self.assertIn(self.hs.hostname, res)
+        keyres = res[self.hs.hostname][keyid]
+        assert isinstance(keyres, FetchKeyResult)
+        self.assertEqual(
+            signedjson.key.encode_verify_key_base64(keyres.verify_key),
+            signedjson.key.encode_verify_key_base64(self.hs_signing_key.verify_key),
+        )