diff --git a/CHANGES.md b/CHANGES.md
index d06b8c8ad3..c8840e9c74 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -19,7 +19,7 @@ Features
- Add v2 APIs for the `send_join` and `send_leave` federation endpoints (as described in [MSC1802](https://github.com/matrix-org/matrix-doc/pull/1802)). ([\#6349](https://github.com/matrix-org/synapse/issues/6349))
- Add a develop script to generate full SQL schemas. ([\#6394](https://github.com/matrix-org/synapse/issues/6394))
-- Add custom SAML username mapping functinality through an external provider plugin. ([\#6411](https://github.com/matrix-org/synapse/issues/6411))
+- Add custom SAML username mapping functionality through an external provider plugin. ([\#6411](https://github.com/matrix-org/synapse/issues/6411))
- Automatically delete empty groups/communities. ([\#6453](https://github.com/matrix-org/synapse/issues/6453))
- Add option `limit_profile_requests_to_users_who_share_rooms` to prevent requirement of a local user sharing a room with another user to query their profile information. ([\#6523](https://github.com/matrix-org/synapse/issues/6523))
- Add an `export_signing_key` script to extract the public part of signing keys when rotating them. ([\#6546](https://github.com/matrix-org/synapse/issues/6546))
diff --git a/changelog.d/5742.feature b/changelog.d/5742.feature
new file mode 100644
index 0000000000..de10302275
--- /dev/null
+++ b/changelog.d/5742.feature
@@ -0,0 +1 @@
+Allow admin to create or modify a user. Contributed by Awesome Technologies Innovationslabor GmbH.
diff --git a/changelog.d/6621.doc b/changelog.d/6621.doc
new file mode 100644
index 0000000000..6722ccfda3
--- /dev/null
+++ b/changelog.d/6621.doc
@@ -0,0 +1 @@
+Fix a typo in the configuration example for purge jobs in the sample configuration file.
diff --git a/changelog.d/6624.doc b/changelog.d/6624.doc
new file mode 100644
index 0000000000..bc9a022db2
--- /dev/null
+++ b/changelog.d/6624.doc
@@ -0,0 +1 @@
+Add complete documentation of the message retention policies support.
diff --git a/changelog.d/6654.bugfix b/changelog.d/6654.bugfix
new file mode 100644
index 0000000000..fed35252db
--- /dev/null
+++ b/changelog.d/6654.bugfix
@@ -0,0 +1 @@
+Correctly proxy HTTP errors due to API calls to remote group servers.
diff --git a/changelog.d/6656.doc b/changelog.d/6656.doc
new file mode 100644
index 0000000000..9f32da1a88
--- /dev/null
+++ b/changelog.d/6656.doc
@@ -0,0 +1 @@
+Mount only the homeserver.yaml config file instead of overriding the entire /etc folder of the container in docker-compose.yaml. Contributed by Fabian Meyer.
diff --git a/changelog.d/6664.bugfix b/changelog.d/6664.bugfix
new file mode 100644
index 0000000000..8c6a6fa1c8
--- /dev/null
+++ b/changelog.d/6664.bugfix
@@ -0,0 +1 @@
+Fix media repo admin APIs when using a media worker.
diff --git a/changelog.d/6665.doc b/changelog.d/6665.doc
new file mode 100644
index 0000000000..bc9a022db2
--- /dev/null
+++ b/changelog.d/6665.doc
@@ -0,0 +1 @@
+Add complete documentation of the message retention policies support.
diff --git a/contrib/docker/docker-compose.yml b/contrib/docker/docker-compose.yml
index 72c87054e5..2b044baf78 100644
--- a/contrib/docker/docker-compose.yml
+++ b/contrib/docker/docker-compose.yml
@@ -18,7 +18,7 @@ services:
- SYNAPSE_CONFIG_PATH=/etc/homeserver.yaml
volumes:
# You may either store all the files in a local folder
- - ./matrix-config:/etc
+ - ./matrix-config/homeserver.yaml:/etc/homeserver.yaml
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data
diff --git a/docs/admin_api/user_admin_api.rst b/docs/admin_api/user_admin_api.rst
index b451dc5014..0b3d09d694 100644
--- a/docs/admin_api/user_admin_api.rst
+++ b/docs/admin_api/user_admin_api.rst
@@ -1,3 +1,33 @@
+Create or modify Account
+========================
+
+This API allows an administrator to create or modify a user account with a
+specific ``user_id``.
+
+This API is::
+
+ PUT /_synapse/admin/v2/users/<user_id>
+
+with a body of:
+
+.. code:: json
+
+ {
+ "password": "user_password",
+ "displayname": "User",
+ "avatar_url": "<avatar_url>",
+ "admin": false,
+ "deactivated": false
+ }
+
+including an ``access_token`` of a server admin.
+
+The parameter ``displayname`` is optional and defaults to the localpart of ``user_id``.
+The parameter ``avatar_url`` is optional.
+The parameter ``admin`` is optional and defaults to ``false``.
+The parameter ``deactivated`` is optional and defaults to ``false``.
+If the user already exists, any omitted optional parameters default to the user's current values.
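+
+For illustration, a minimal sketch of driving this endpoint from Python with
+the ``requests`` library (the homeserver URL, user id and access token below
+are placeholders to replace with your own values):
+
+.. code:: python
+
+    import requests
+
+    base_url = "https://localhost:8448"       # your homeserver
+    user_id = "@alice:example.com"            # the account to create or modify
+    admin_token = "<admin_access_token>"      # access token of a server admin
+
+    resp = requests.put(
+        base_url + "/_synapse/admin/v2/users/" + user_id,
+        headers={"Authorization": "Bearer " + admin_token},
+        json={
+            "password": "user_password",
+            "displayname": "Alice",
+            "admin": False,
+            "deactivated": False,
+        },
+    )
+    # 201 if the user was created, 200 if an existing user was modified.
+    print(resp.status_code, resp.json())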
+
List Accounts
=============
@@ -50,7 +80,8 @@ This API returns information about a specific user account.
The api is::
- GET /_synapse/admin/v1/whois/<user_id>
+ GET /_synapse/admin/v1/whois/<user_id> (deprecated)
+ GET /_synapse/admin/v2/users/<user_id>
including an ``access_token`` of a server admin.
diff --git a/docs/message_retention_policies.md b/docs/message_retention_policies.md
new file mode 100644
index 0000000000..4300809dfe
--- /dev/null
+++ b/docs/message_retention_policies.md
@@ -0,0 +1,191 @@
+# Message retention policies
+
+Synapse admins can enable support for message retention policies on
+their homeserver. Message retention policies exist at a room level,
+follow the semantics described in
+[MSC1763](https://github.com/matrix-org/matrix-doc/blob/matthew/msc1763/proposals/1763-configurable-retention-periods.md),
+and allow server and room admins to configure how long messages should
+be kept in a homeserver's database before being purged from it.
+**Please note that, as this feature isn't part of the Matrix
+specification yet, this implementation is to be considered
+experimental.**
+
+A message retention policy is mainly defined by its `max_lifetime`
+parameter, which defines how long a message can be kept around after
+it was sent to the room. If a room doesn't have a message retention
+policy, and there's no default one for a given server, then no message
+sent in that room is ever purged on that server.
+
+MSC1763 also specifies semantics for a `min_lifetime` parameter which
+defines the amount of time after which an event _can_ get purged (after
+it was sent to the room), but Synapse doesn't currently support it
+beyond registering it.
+
+Both `max_lifetime` and `min_lifetime` are optional parameters.
+
+Note that message retention policies don't apply to state events.
+
+Once an event reaches its expiry date (defined as the time it was sent
+plus the value for `max_lifetime` in the room), two things happen:
+
+* Synapse stops serving the event to clients via any endpoint.
+* The message gets picked up by the next purge job (see the "Purge jobs"
+ section) and is removed from Synapse's database.
+
+Since purge jobs don't run continuously, this means that an event might
+stay in a server's database for longer than the value for `max_lifetime`
+in the room would allow, though it will be hidden from clients in the meantime.
+
+Similarly, if a server (with support for message retention policies
+enabled) receives from another server an event that should have been
+purged according to its room's policy, then the receiving server will
+process and store that event until it's picked up by the next purge job,
+though it will always hide it from clients.
+
+
+## Server configuration
+
+Support for this feature can be enabled and configured in the
+`retention` section of the Synapse configuration file (see the
+[sample file](https://github.com/matrix-org/synapse/blob/v1.7.3/docs/sample_config.yaml#L332-L393)).
+
+To enable support for message retention policies, set `enabled` in this
+section to `true`.
+
+
+### Default policy
+
+A default message retention policy is a policy defined in Synapse's
+configuration that is used by Synapse for every room that doesn't have a
+message retention policy configured in its state. This allows server
+admins to ensure that messages are never kept indefinitely in a server's
+database.
+
+A default policy can be defined as such, in the `retention` section of
+the configuration file:
+
+```yaml
+ default_policy:
+ min_lifetime: 1d
+ max_lifetime: 1y
+```
+
+Here, `min_lifetime` and `max_lifetime` have the same meaning and level
+of support as previously described. They can be expressed either as a
+duration (using the units `s` (seconds), `m` (minutes), `h` (hours),
+`d` (days), `w` (weeks) and `y` (years)) or as a number of milliseconds.
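+
+As a rough illustration of these unit semantics, here's a small sketch
+(illustrative only; this is not Synapse's actual configuration parser):
+
+```python
+# Maps the duration units described above to milliseconds.
+UNITS_MS = {
+    "s": 1000,
+    "m": 60 * 1000,
+    "h": 60 * 60 * 1000,
+    "d": 24 * 60 * 60 * 1000,
+    "w": 7 * 24 * 60 * 60 * 1000,
+    "y": 365 * 24 * 60 * 60 * 1000,
+}
+
+def duration_to_ms(value):
+    """Convert a "1d"-style duration (or a plain number of ms) to milliseconds."""
+    if isinstance(value, int):
+        return value
+    return int(value[:-1]) * UNITS_MS[value[-1]]
+
+assert duration_to_ms("1d") == 86400000
+assert duration_to_ms("1y") == duration_to_ms("365d")
+```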
+
+
+### Purge jobs
+
+Purge jobs are the jobs that Synapse runs in the background to purge
+expired events from the database. They are only run if support for
+message retention policies is enabled in the server's configuration. If
+the server admin doesn't configure any purge jobs, Synapse will use a
+default configuration, which is described in the
+[sample configuration file](https://github.com/matrix-org/synapse/blob/master/docs/sample_config.yaml#L332-L393).
+
+Some server admins might want finer control over when events are removed,
+depending on the retention policy of the room they were sent in. This can be
+done by setting the `purge_jobs` sub-section in the `retention` section of
+the configuration file. An example of such a configuration could be:
+
+```yaml
+ purge_jobs:
+ - longest_max_lifetime: 3d
+ interval: 12h
+ - shortest_max_lifetime: 3d
+ longest_max_lifetime: 1w
+ interval: 1d
+ - shortest_max_lifetime: 1w
+ interval: 2d
+```
+
+In this example, we define three jobs:
+
+* one that runs twice a day (every 12 hours) and purges events in rooms
+  whose policy's `max_lifetime` is lower than or equal to 3 days.
+* one that runs once a day and purges events in rooms whose policy's
+  `max_lifetime` is between 3 days and a week.
+* one that runs once every 2 days and purges events in rooms whose
+  policy's `max_lifetime` is greater than a week.
+
+Note that this example is tailored to show different configurations and
+features slightly more jobs than is probably necessary (in practice, a
+server admin would probably consider it better to replace the last two
+jobs with one that runs once a day and handles rooms whose policy's
+`max_lifetime` is greater than 3 days).
+
+Keep in mind, when configuring these jobs, that a purge job can become
+quite heavy on the server if it targets many rooms, so prefer having
+several jobs with a low interval that each target a limited set of rooms.
+Also make sure to include a job with no minimum and one with no maximum
+to make sure your configuration handles every policy.
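+
+To make the bounds concrete, here is a small sketch of which of the three
+example jobs above would handle a given room (illustrative only; it mirrors
+the description in this document rather than Synapse's internal code):
+
+```python
+# The three example jobs, with their bounds expressed in days.
+purge_jobs = [
+    {"longest_max_lifetime": 3},                              # runs every 12h
+    {"shortest_max_lifetime": 3, "longest_max_lifetime": 7},  # runs every 1d
+    {"shortest_max_lifetime": 7},                             # runs every 2d
+]
+
+def job_for_policy(max_lifetime_days):
+    """Return the job that purges rooms with the given max_lifetime."""
+    for job in purge_jobs:
+        low = job.get("shortest_max_lifetime", 0)
+        high = job.get("longest_max_lifetime", float("inf"))
+        if low < max_lifetime_days <= high:
+            return job
+    return None  # no job covers this policy; its events would never be purged
+
+assert job_for_policy(2) is purge_jobs[0]   # <= 3 days
+assert job_for_policy(5) is purge_jobs[1]   # between 3 days and a week
+assert job_for_policy(30) is purge_jobs[2]  # more than a week
+```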
+
+As previously mentioned in this documentation, while a purge job that
+runs e.g. every day means that an expired event might stay in the
+database for up to a day after its expiry, Synapse hides expired events
+from clients as soon as they expire, so the event is not visible to
+local users between its expiry date and the moment it gets purged from
+the server's database.
+
+
+### Lifetime limits
+
+**Note: this feature is mainly useful within a closed federation or on
+servers that don't federate, because there currently is no way to
+enforce these limits in an open federation.**
+
+Server admins can restrict the values their local users are allowed to
+use for both `min_lifetime` and `max_lifetime`. These limits can be
+defined as such in the `retention` section of the configuration file:
+
+```yaml
+ allowed_lifetime_min: 1d
+ allowed_lifetime_max: 1y
+```
+
+Here, `allowed_lifetime_min` is the lowest value a local user can set
+for both `min_lifetime` and `max_lifetime`, and `allowed_lifetime_max`
+is the highest value. Both parameters are optional (e.g. setting
+`allowed_lifetime_min` but not `allowed_lifetime_max` only enforces a
+minimum and no maximum).
+
+Like other settings in this section, these parameters can be expressed
+either as a duration or as a number of milliseconds.
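+
+As a small worked example (illustrative only, with values in milliseconds),
+this is how the limits above would constrain the lifetimes a local user may
+set:
+
+```python
+# Assumed example limits: allowed_lifetime_min: 1d, allowed_lifetime_max: 1y.
+allowed_lifetime_min = 24 * 60 * 60 * 1000        # 1d in ms
+allowed_lifetime_max = 365 * 24 * 60 * 60 * 1000  # 1y in ms
+
+def lifetime_allowed(lifetime_ms):
+    """Check a min_lifetime or max_lifetime value against the server's limits."""
+    return allowed_lifetime_min <= lifetime_ms <= allowed_lifetime_max
+
+assert lifetime_allowed(7 * 24 * 60 * 60 * 1000)  # one week: allowed
+assert not lifetime_allowed(60 * 1000)            # one minute: too short
+```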
+
+
+## Room configuration
+
+To configure a room's message retention policy, a room's admin or
+moderator needs to send a state event in that room with the type
+`m.room.retention` and the following content:
+
+```json
+{
+ "max_lifetime": ...
+}
+```
+
+In this event's content, the `max_lifetime` parameter has the same
+meaning as previously described, and needs to be expressed in
+milliseconds. The event's content can also include a `min_lifetime`
+parameter, which has the same meaning and limited support as previously
+described.
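+
+As an illustration, a minimal sketch of sending such a state event through the
+Matrix client-server API with Python's `requests` library (the homeserver URL,
+room id and access token are placeholders; most admins would do this from
+their Matrix client instead):
+
+```python
+from urllib.parse import quote
+
+import requests
+
+base_url = "https://localhost:8448"          # your homeserver
+room_id = "!someroom:example.com"            # the room to configure
+access_token = "<moderator_access_token>"    # must be allowed to send state events
+
+one_week_ms = 7 * 24 * 60 * 60 * 1000
+
+resp = requests.put(
+    "%s/_matrix/client/r0/rooms/%s/state/m.room.retention"
+    % (base_url, quote(room_id, safe="")),
+    headers={"Authorization": "Bearer %s" % access_token},
+    json={"max_lifetime": one_week_ms},
+)
+resp.raise_for_status()
+print(resp.json())  # contains the event_id of the new m.room.retention event
+```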
+
+Note that, of all the servers in the room, only the ones with support for
+message retention policies enabled will actually remove expired events. This
+support is currently not enabled by default in Synapse.
+
+
+## Note on reclaiming disk space
+
+While purge jobs actually delete data from the database, the disk space
+used by the database might not decrease immediately on the database's
+host. However, even though the database engine won't free up the disk
+space, it will start writing new data into the space the purged data
+previously occupied.
+
+To reclaim the freed disk space and return it to the operating system,
+the server admin needs to run `VACUUM FULL;` (or `VACUUM;` for SQLite
+databases) on Synapse's database (see the related
+[PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-vacuum.html)).
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml
index fad5f968b5..0a2505e7bb 100644
--- a/docs/sample_config.yaml
+++ b/docs/sample_config.yaml
@@ -387,17 +387,17 @@ retention:
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
- # of outdated messages on a very frequent basis (e.g. every 5min), but not want
- # that purge to be performed by a job that's iterating over every room it knows,
- # which would be quite heavy on the server.
+ # of outdated messages on a more frequent basis than for the rest of the rooms
+ # (e.g. every 12h), but not want that purge to be performed by a job that's
+ # iterating over every room it knows, which could be heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
- # interval: 5m:
+ # interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
- # interval: 24h
+ # interval: 1d
## TLS ##
diff --git a/synapse/app/media_repository.py b/synapse/app/media_repository.py
index a63c53dc44..5b5832214a 100644
--- a/synapse/app/media_repository.py
+++ b/synapse/app/media_repository.py
@@ -34,6 +34,7 @@ from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
+from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.admin import register_servlets_for_media_repo
@@ -47,6 +48,7 @@ logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
+ RoomStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
diff --git a/synapse/config/server.py b/synapse/config/server.py
index 38f6ff9edc..9ac112233b 100644
--- a/synapse/config/server.py
+++ b/synapse/config/server.py
@@ -948,17 +948,17 @@ class ServerConfig(Config):
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
- # of outdated messages on a very frequent basis (e.g. every 5min), but not want
- # that purge to be performed by a job that's iterating over every room it knows,
- # which would be quite heavy on the server.
+ # of outdated messages on a more frequent basis than for the rest of the rooms
+ # (e.g. every 12h), but not want that purge to be performed by a job that's
+ # iterating over every room it knows, which could be heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
- # interval: 5m:
+ # interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
- # interval: 24h
+ # interval: 1d
"""
% locals()
)
diff --git a/synapse/handlers/admin.py b/synapse/handlers/admin.py
index 1a4ba12385..76d18a8ba8 100644
--- a/synapse/handlers/admin.py
+++ b/synapse/handlers/admin.py
@@ -51,6 +51,15 @@ class AdminHandler(BaseHandler):
return ret
+ async def get_user(self, user):
+ """Function to get user details"""
+ ret = await self.store.get_user_by_id(user.to_string())
+ if ret:
+ profile = await self.store.get_profileinfo(user.localpart)
+ ret["displayname"] = profile.display_name
+ ret["avatar_url"] = profile.avatar_url
+ return ret
+
async def get_users(self):
"""Function to retrieve a list of users in users table.
diff --git a/synapse/handlers/groups_local.py b/synapse/handlers/groups_local.py
index 92fecbfc44..319565510f 100644
--- a/synapse/handlers/groups_local.py
+++ b/synapse/handlers/groups_local.py
@@ -130,6 +130,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.get_group_summary(
get_domain_from_id(group_id), group_id, requester_user_id
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -190,6 +192,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.create_group(
get_domain_from_id(group_id), group_id, user_id, content
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -231,6 +235,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.get_users_in_group(
get_domain_from_id(group_id), group_id, requester_user_id
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -271,6 +277,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.join_group(
get_domain_from_id(group_id), group_id, user_id, content
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -315,6 +323,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.accept_group_invite(
get_domain_from_id(group_id), group_id, user_id, content
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -361,6 +371,8 @@ class GroupsLocalHandler(object):
requester_user_id,
content,
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -424,6 +436,8 @@ class GroupsLocalHandler(object):
user_id,
content,
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -460,6 +474,8 @@ class GroupsLocalHandler(object):
bulk_result = yield self.transport_client.bulk_get_publicised_groups(
get_domain_from_id(user_id), [user_id]
)
+ except HttpResponseException as e:
+ raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
diff --git a/synapse/rest/admin/__init__.py b/synapse/rest/admin/__init__.py
index c122c449f4..a10b4a9b72 100644
--- a/synapse/rest/admin/__init__.py
+++ b/synapse/rest/admin/__init__.py
@@ -38,6 +38,7 @@ from synapse.rest.admin.users import (
SearchUsersRestServlet,
UserAdminServlet,
UserRegisterServlet,
+ UserRestServletV2,
UsersRestServlet,
UsersRestServletV2,
WhoisRestServlet,
@@ -191,6 +192,7 @@ def register_servlets(hs, http_server):
SendServerNoticeServlet(hs).register(http_server)
VersionServlet(hs).register(http_server)
UserAdminServlet(hs).register(http_server)
+ UserRestServletV2(hs).register(http_server)
UsersRestServletV2(hs).register(http_server)
diff --git a/synapse/rest/admin/users.py b/synapse/rest/admin/users.py
index 1937879dbe..574cb90c74 100644
--- a/synapse/rest/admin/users.py
+++ b/synapse/rest/admin/users.py
@@ -102,6 +102,148 @@ class UsersRestServletV2(RestServlet):
return 200, ret
+class UserRestServletV2(RestServlet):
+ PATTERNS = (re.compile("^/_synapse/admin/v2/users/(?P<user_id>@[^/]+)$"),)
+
+ """Get request to list user details.
+ This needs user to have administrator access in Synapse.
+
+ GET /_synapse/admin/v2/users/<user_id>
+
+ returns:
+ 200 OK with user details if success otherwise an error.
+
+ Put request to allow an administrator to add or modify a user.
+ This needs user to have administrator access in Synapse.
+ We use PUT instead of POST since we already know the id of the user
+ object to create. POST could be used to create guests.
+
+ PUT /_synapse/admin/v2/users/<user_id>
+ {
+ "password": "secret",
+ "displayname": "User"
+ }
+
+ returns:
+ 201 OK with new user object if user was created or
+ 200 OK with modified user object if user was modified
+ otherwise an error.
+ """
+
+ def __init__(self, hs):
+ self.hs = hs
+ self.auth = hs.get_auth()
+ self.admin_handler = hs.get_handlers().admin_handler
+ self.profile_handler = hs.get_profile_handler()
+ self.set_password_handler = hs.get_set_password_handler()
+ self.deactivate_account_handler = hs.get_deactivate_account_handler()
+ self.registration_handler = hs.get_registration_handler()
+
+ async def on_GET(self, request, user_id):
+ await assert_requester_is_admin(self.auth, request)
+
+ target_user = UserID.from_string(user_id)
+ if not self.hs.is_mine(target_user):
+ raise SynapseError(400, "Can only lookup local users")
+
+ ret = await self.admin_handler.get_user(target_user)
+
+ return 200, ret
+
+ async def on_PUT(self, request, user_id):
+ await assert_requester_is_admin(self.auth, request)
+
+ target_user = UserID.from_string(user_id)
+ body = parse_json_object_from_request(request)
+
+ if not self.hs.is_mine(target_user):
+ raise SynapseError(400, "This endpoint can only be used with local users")
+
+ user = await self.admin_handler.get_user(target_user)
+
+ if user: # modify user
+ requester = await self.auth.get_user_by_req(request)
+
+ if "displayname" in body:
+ await self.profile_handler.set_displayname(
+ target_user, requester, body["displayname"], True
+ )
+
+ if "avatar_url" in body:
+ await self.profile_handler.set_avatar_url(
+ target_user, requester, body["avatar_url"], True
+ )
+
+ if "admin" in body:
+ set_admin_to = bool(body["admin"])
+ if set_admin_to != user["admin"]:
+ auth_user = requester.user
+ if target_user == auth_user and not set_admin_to:
+ raise SynapseError(400, "You may not demote yourself.")
+
+ await self.admin_handler.set_user_server_admin(
+ target_user, set_admin_to
+ )
+
+ if "password" in body:
+ if (
+ not isinstance(body["password"], text_type)
+ or len(body["password"]) > 512
+ ):
+ raise SynapseError(400, "Invalid password")
+ else:
+ new_password = body["password"]
+ await self.set_password_handler.set_password(
+ target_user, new_password, requester
+ )
+
+ if "deactivated" in body:
+ deactivate = bool(body["deactivated"])
+ if deactivate and not user["deactivated"]:
+ result = await self.deactivate_account_handler.deactivate_account(
+ target_user.to_string(), False
+ )
+ if not result:
+ raise SynapseError(500, "Could not deactivate user")
+
+ user = await self.admin_handler.get_user(target_user)
+ return 200, user
+
+ else: # create user
+ if "password" not in body:
+ raise SynapseError(
+ 400, "password must be specified", errcode=Codes.BAD_JSON
+ )
+ elif (
+ not isinstance(body["password"], text_type)
+ or len(body["password"]) > 512
+ ):
+ raise SynapseError(400, "Invalid password")
+
+ admin = body.get("admin", None)
+ user_type = body.get("user_type", None)
+ displayname = body.get("displayname", None)
+
+ if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
+ raise SynapseError(400, "Invalid user type")
+
+ user_id = await self.registration_handler.register_user(
+ localpart=target_user.localpart,
+ password=body["password"],
+ admin=bool(admin),
+ default_display_name=displayname,
+ user_type=user_type,
+ )
+ if "avatar_url" in body:
+ await self.profile_handler.set_avatar_url(
+ user_id, requester, body["avatar_url"], True
+ )
+
+ ret = await self.admin_handler.get_user(target_user)
+
+ return 201, ret
+
+
class UserRegisterServlet(RestServlet):
"""
Attributes:
diff --git a/synapse/storage/data_stores/main/registration.py b/synapse/storage/data_stores/main/registration.py
index 5e8ecac0ea..cb4b2b39a0 100644
--- a/synapse/storage/data_stores/main/registration.py
+++ b/synapse/storage/data_stores/main/registration.py
@@ -52,11 +52,13 @@ class RegistrationWorkerStore(SQLBaseStore):
"name",
"password_hash",
"is_guest",
+ "admin",
"consent_version",
"consent_server_notice_sent",
"appservice_id",
"creation_ts",
"user_type",
+ "deactivated",
],
allow_none=True,
desc="get_user_by_id",
diff --git a/synapse/storage/data_stores/main/room.py b/synapse/storage/data_stores/main/room.py
index 79cfd39194..8636d75030 100644
--- a/synapse/storage/data_stores/main/room.py
+++ b/synapse/storage/data_stores/main/room.py
@@ -366,6 +366,134 @@ class RoomWorkerStore(SQLBaseStore):
defer.returnValue(row)
+ def get_media_mxcs_in_room(self, room_id):
+ """Retrieves all the local and remote media MXC URIs in a given room
+
+ Args:
+ room_id (str)
+
+ Returns:
+ The local and remote media as a lists of tuples where the key is
+ the hostname and the value is the media ID.
+ """
+
+ def _get_media_mxcs_in_room_txn(txn):
+ local_mxcs, remote_mxcs = self._get_media_mxcs_in_room_txn(txn, room_id)
+ local_media_mxcs = []
+ remote_media_mxcs = []
+
+ # Convert the IDs to MXC URIs
+ for media_id in local_mxcs:
+ local_media_mxcs.append("mxc://%s/%s" % (self.hs.hostname, media_id))
+ for hostname, media_id in remote_mxcs:
+ remote_media_mxcs.append("mxc://%s/%s" % (hostname, media_id))
+
+ return local_media_mxcs, remote_media_mxcs
+
+ return self.db.runInteraction(
+ "get_media_ids_in_room", _get_media_mxcs_in_room_txn
+ )
+
+ def quarantine_media_ids_in_room(self, room_id, quarantined_by):
+ """For a room loops through all events with media and quarantines
+ the associated media
+ """
+
+ def _quarantine_media_in_room_txn(txn):
+ local_mxcs, remote_mxcs = self._get_media_mxcs_in_room_txn(txn, room_id)
+ total_media_quarantined = 0
+
+ # Now update all the tables to set the quarantined_by flag
+
+ txn.executemany(
+ """
+ UPDATE local_media_repository
+ SET quarantined_by = ?
+ WHERE media_id = ?
+ """,
+ ((quarantined_by, media_id) for media_id in local_mxcs),
+ )
+
+ txn.executemany(
+ """
+ UPDATE remote_media_cache
+ SET quarantined_by = ?
+ WHERE media_origin = ? AND media_id = ?
+ """,
+ (
+ (quarantined_by, origin, media_id)
+ for origin, media_id in remote_mxcs
+ ),
+ )
+
+ total_media_quarantined += len(local_mxcs)
+ total_media_quarantined += len(remote_mxcs)
+
+ return total_media_quarantined
+
+ return self.db.runInteraction(
+ "quarantine_media_in_room", _quarantine_media_in_room_txn
+ )
+
+ def _get_media_mxcs_in_room_txn(self, txn, room_id):
+ """Retrieves all the local and remote media MXC URIs in a given room
+
+ Args:
+ txn (cursor)
+ room_id (str)
+
+ Returns:
+ The local and remote media as a lists of tuples where the key is
+ the hostname and the value is the media ID.
+ """
+ mxc_re = re.compile("^mxc://([^/]+)/([^/#?]+)")
+
+ sql = """
+ SELECT stream_ordering, json FROM events
+ JOIN event_json USING (room_id, event_id)
+ WHERE room_id = ?
+ %(where_clause)s
+ AND contains_url = ? AND outlier = ?
+ ORDER BY stream_ordering DESC
+ LIMIT ?
+ """
+ txn.execute(sql % {"where_clause": ""}, (room_id, True, False, 100))
+
+ local_media_mxcs = []
+ remote_media_mxcs = []
+
+ while True:
+ next_token = None
+ for stream_ordering, content_json in txn:
+ next_token = stream_ordering
+ event_json = json.loads(content_json)
+ content = event_json["content"]
+ content_url = content.get("url")
+ thumbnail_url = content.get("info", {}).get("thumbnail_url")
+
+ for url in (content_url, thumbnail_url):
+ if not url:
+ continue
+ matches = mxc_re.match(url)
+ if matches:
+ hostname = matches.group(1)
+ media_id = matches.group(2)
+ if hostname == self.hs.hostname:
+ local_media_mxcs.append(media_id)
+ else:
+ remote_media_mxcs.append((hostname, media_id))
+
+ if next_token is None:
+ # We've gone through the whole room, so we're finished.
+ break
+
+ txn.execute(
+ sql % {"where_clause": "AND stream_ordering < ?"},
+ (room_id, next_token, True, False, 100),
+ )
+
+ return local_media_mxcs, remote_media_mxcs
+
class RoomBackgroundUpdateStore(SQLBaseStore):
REMOVE_TOMESTONED_ROOMS_BG_UPDATE = "remove_tombstoned_rooms_from_directory"
@@ -810,126 +938,6 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore, SearchStore):
(room_id,),
)
- def get_media_mxcs_in_room(self, room_id):
- """Retrieves all the local and remote media MXC URIs in a given room
-
- Args:
- room_id (str)
-
- Returns:
- The local and remote media as a lists of tuples where the key is
- the hostname and the value is the media ID.
- """
-
- def _get_media_mxcs_in_room_txn(txn):
- local_mxcs, remote_mxcs = self._get_media_mxcs_in_room_txn(txn, room_id)
- local_media_mxcs = []
- remote_media_mxcs = []
-
- # Convert the IDs to MXC URIs
- for media_id in local_mxcs:
- local_media_mxcs.append("mxc://%s/%s" % (self.hs.hostname, media_id))
- for hostname, media_id in remote_mxcs:
- remote_media_mxcs.append("mxc://%s/%s" % (hostname, media_id))
-
- return local_media_mxcs, remote_media_mxcs
-
- return self.db.runInteraction(
- "get_media_ids_in_room", _get_media_mxcs_in_room_txn
- )
-
- def quarantine_media_ids_in_room(self, room_id, quarantined_by):
- """For a room loops through all events with media and quarantines
- the associated media
- """
-
- def _quarantine_media_in_room_txn(txn):
- local_mxcs, remote_mxcs = self._get_media_mxcs_in_room_txn(txn, room_id)
- total_media_quarantined = 0
-
- # Now update all the tables to set the quarantined_by flag
-
- txn.executemany(
- """
- UPDATE local_media_repository
- SET quarantined_by = ?
- WHERE media_id = ?
- """,
- ((quarantined_by, media_id) for media_id in local_mxcs),
- )
-
- txn.executemany(
- """
- UPDATE remote_media_cache
- SET quarantined_by = ?
- WHERE media_origin = ? AND media_id = ?
- """,
- (
- (quarantined_by, origin, media_id)
- for origin, media_id in remote_mxcs
- ),
- )
-
- total_media_quarantined += len(local_mxcs)
- total_media_quarantined += len(remote_mxcs)
-
- return total_media_quarantined
-
- return self.db.runInteraction(
- "quarantine_media_in_room", _quarantine_media_in_room_txn
- )
-
- def _get_media_mxcs_in_room_txn(self, txn, room_id):
- """Retrieves all the local and remote media MXC URIs in a given room
-
- Args:
- txn (cursor)
- room_id (str)
-
- Returns:
- The local and remote media as a lists of tuples where the key is
- the hostname and the value is the media ID.
- """
- mxc_re = re.compile("^mxc://([^/]+)/([^/#?]+)")
-
- next_token = self.get_current_events_token() + 1
- local_media_mxcs = []
- remote_media_mxcs = []
-
- while next_token:
- sql = """
- SELECT stream_ordering, json FROM events
- JOIN event_json USING (room_id, event_id)
- WHERE room_id = ?
- AND stream_ordering < ?
- AND contains_url = ? AND outlier = ?
- ORDER BY stream_ordering DESC
- LIMIT ?
- """
- txn.execute(sql, (room_id, next_token, True, False, 100))
-
- next_token = None
- for stream_ordering, content_json in txn:
- next_token = stream_ordering
- event_json = json.loads(content_json)
- content = event_json["content"]
- content_url = content.get("url")
- thumbnail_url = content.get("info", {}).get("thumbnail_url")
-
- for url in (content_url, thumbnail_url):
- if not url:
- continue
- matches = mxc_re.match(url)
- if matches:
- hostname = matches.group(1)
- media_id = matches.group(2)
- if hostname == self.hs.hostname:
- local_media_mxcs.append(media_id)
- else:
- remote_media_mxcs.append((hostname, media_id))
-
- return local_media_mxcs, remote_media_mxcs
-
@defer.inlineCallbacks
def get_rooms_for_retention_period_in_range(
self, min_ms, max_ms, include_null=False
diff --git a/tests/rest/admin/test_admin.py b/tests/rest/admin/test_admin.py
index 325bd6a608..6ceb483aa8 100644
--- a/tests/rest/admin/test_admin.py
+++ b/tests/rest/admin/test_admin.py
@@ -13,14 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import hashlib
-import hmac
import json
from mock import Mock
import synapse.rest.admin
-from synapse.api.constants import UserTypes
from synapse.http.server import JsonResource
from synapse.rest.admin import VersionServlet
from synapse.rest.client.v1 import events, login, room
@@ -47,341 +44,6 @@ class VersionTestCase(unittest.HomeserverTestCase):
)
-class UserRegisterTestCase(unittest.HomeserverTestCase):
-
- servlets = [synapse.rest.admin.register_servlets_for_client_rest_resource]
-
- def make_homeserver(self, reactor, clock):
-
- self.url = "/_matrix/client/r0/admin/register"
-
- self.registration_handler = Mock()
- self.identity_handler = Mock()
- self.login_handler = Mock()
- self.device_handler = Mock()
- self.device_handler.check_device_registered = Mock(return_value="FAKE")
-
- self.datastore = Mock(return_value=Mock())
- self.datastore.get_current_state_deltas = Mock(return_value=(0, []))
-
- self.secrets = Mock()
-
- self.hs = self.setup_test_homeserver()
-
- self.hs.config.registration_shared_secret = "shared"
-
- self.hs.get_media_repository = Mock()
- self.hs.get_deactivate_account_handler = Mock()
-
- return self.hs
-
- def test_disabled(self):
- """
- If there is no shared secret, registration through this method will be
- prevented.
- """
- self.hs.config.registration_shared_secret = None
-
- request, channel = self.make_request("POST", self.url, b"{}")
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual(
- "Shared secret registration is not enabled", channel.json_body["error"]
- )
-
- def test_get_nonce(self):
- """
- Calling GET on the endpoint will return a randomised nonce, using the
- homeserver's secrets provider.
- """
- secrets = Mock()
- secrets.token_hex = Mock(return_value="abcd")
-
- self.hs.get_secrets = Mock(return_value=secrets)
-
- request, channel = self.make_request("GET", self.url)
- self.render(request)
-
- self.assertEqual(channel.json_body, {"nonce": "abcd"})
-
- def test_expired_nonce(self):
- """
- Calling GET on the endpoint will return a randomised nonce, which will
- only last for SALT_TIMEOUT (60s).
- """
- request, channel = self.make_request("GET", self.url)
- self.render(request)
- nonce = channel.json_body["nonce"]
-
- # 59 seconds
- self.reactor.advance(59)
-
- body = json.dumps({"nonce": nonce})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("username must be specified", channel.json_body["error"])
-
- # 61 seconds
- self.reactor.advance(2)
-
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("unrecognised nonce", channel.json_body["error"])
-
- def test_register_incorrect_nonce(self):
- """
- Only the provided nonce can be used, as it's checked in the MAC.
- """
- request, channel = self.make_request("GET", self.url)
- self.render(request)
- nonce = channel.json_body["nonce"]
-
- want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
- want_mac.update(b"notthenonce\x00bob\x00abc123\x00admin")
- want_mac = want_mac.hexdigest()
-
- body = json.dumps(
- {
- "nonce": nonce,
- "username": "bob",
- "password": "abc123",
- "admin": True,
- "mac": want_mac,
- }
- )
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("HMAC incorrect", channel.json_body["error"])
-
- def test_register_correct_nonce(self):
- """
- When the correct nonce is provided, and the right key is provided, the
- user is registered.
- """
- request, channel = self.make_request("GET", self.url)
- self.render(request)
- nonce = channel.json_body["nonce"]
-
- want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
- want_mac.update(
- nonce.encode("ascii") + b"\x00bob\x00abc123\x00admin\x00support"
- )
- want_mac = want_mac.hexdigest()
-
- body = json.dumps(
- {
- "nonce": nonce,
- "username": "bob",
- "password": "abc123",
- "admin": True,
- "user_type": UserTypes.SUPPORT,
- "mac": want_mac,
- }
- )
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("@bob:test", channel.json_body["user_id"])
-
- def test_nonce_reuse(self):
- """
- A valid unrecognised nonce.
- """
- request, channel = self.make_request("GET", self.url)
- self.render(request)
- nonce = channel.json_body["nonce"]
-
- want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
- want_mac.update(nonce.encode("ascii") + b"\x00bob\x00abc123\x00admin")
- want_mac = want_mac.hexdigest()
-
- body = json.dumps(
- {
- "nonce": nonce,
- "username": "bob",
- "password": "abc123",
- "admin": True,
- "mac": want_mac,
- }
- )
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("@bob:test", channel.json_body["user_id"])
-
- # Now, try and reuse it
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("unrecognised nonce", channel.json_body["error"])
-
- def test_missing_parts(self):
- """
- Synapse will complain if you don't give nonce, username, password, and
- mac. Admin and user_types are optional. Additional checks are done for length
- and type.
- """
-
- def nonce():
- request, channel = self.make_request("GET", self.url)
- self.render(request)
- return channel.json_body["nonce"]
-
- #
- # Nonce check
- #
-
- # Must be present
- body = json.dumps({})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("nonce must be specified", channel.json_body["error"])
-
- #
- # Username checks
- #
-
- # Must be present
- body = json.dumps({"nonce": nonce()})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("username must be specified", channel.json_body["error"])
-
- # Must be a string
- body = json.dumps({"nonce": nonce(), "username": 1234})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("Invalid username", channel.json_body["error"])
-
- # Must not have null bytes
- body = json.dumps({"nonce": nonce(), "username": "abcd\u0000"})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("Invalid username", channel.json_body["error"])
-
- # Must not have null bytes
- body = json.dumps({"nonce": nonce(), "username": "a" * 1000})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("Invalid username", channel.json_body["error"])
-
- #
- # Password checks
- #
-
- # Must be present
- body = json.dumps({"nonce": nonce(), "username": "a"})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("password must be specified", channel.json_body["error"])
-
- # Must be a string
- body = json.dumps({"nonce": nonce(), "username": "a", "password": 1234})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("Invalid password", channel.json_body["error"])
-
- # Must not have null bytes
- body = json.dumps({"nonce": nonce(), "username": "a", "password": "abcd\u0000"})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("Invalid password", channel.json_body["error"])
-
- # Super long
- body = json.dumps({"nonce": nonce(), "username": "a", "password": "A" * 1000})
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("Invalid password", channel.json_body["error"])
-
- #
- # user_type check
- #
-
- # Invalid user_type
- body = json.dumps(
- {
- "nonce": nonce(),
- "username": "a",
- "password": "1234",
- "user_type": "invalid",
- }
- )
- request, channel = self.make_request("POST", self.url, body.encode("utf8"))
- self.render(request)
-
- self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("Invalid user type", channel.json_body["error"])
-
-
-class UsersListTestCase(unittest.HomeserverTestCase):
-
- servlets = [
- synapse.rest.admin.register_servlets,
- login.register_servlets,
- ]
- url = "/_synapse/admin/v2/users"
-
- def prepare(self, reactor, clock, hs):
- self.admin_user = self.register_user("admin", "pass", admin=True)
- self.admin_user_tok = self.login("admin", "pass")
-
- self.register_user("user1", "pass1", admin=False)
- self.register_user("user2", "pass2", admin=False)
-
- def test_no_auth(self):
- """
- Try to list users without authentication.
- """
- request, channel = self.make_request("GET", self.url, b"{}")
- self.render(request)
-
- self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual("M_MISSING_TOKEN", channel.json_body["errcode"])
-
- def test_all_users(self):
- """
- List all users, including deactivated users.
- """
- request, channel = self.make_request(
- "GET",
- self.url + "?deactivated=true",
- b"{}",
- access_token=self.admin_user_tok,
- )
- self.render(request)
-
- self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
- self.assertEqual(3, len(channel.json_body["users"]))
-
-
class ShutdownRoomTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
diff --git a/tests/rest/admin/test_user.py b/tests/rest/admin/test_user.py
new file mode 100644
index 0000000000..7352d609e6
--- /dev/null
+++ b/tests/rest/admin/test_user.py
@@ -0,0 +1,465 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018 New Vector Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import hashlib
+import hmac
+import json
+
+from mock import Mock
+
+import synapse.rest.admin
+from synapse.api.constants import UserTypes
+from synapse.rest.client.v1 import login
+
+from tests import unittest
+
+
+class UserRegisterTestCase(unittest.HomeserverTestCase):
+
+ servlets = [synapse.rest.admin.register_servlets_for_client_rest_resource]
+
+ def make_homeserver(self, reactor, clock):
+
+ self.url = "/_matrix/client/r0/admin/register"
+
+ self.registration_handler = Mock()
+ self.identity_handler = Mock()
+ self.login_handler = Mock()
+ self.device_handler = Mock()
+ self.device_handler.check_device_registered = Mock(return_value="FAKE")
+
+ self.datastore = Mock(return_value=Mock())
+ self.datastore.get_current_state_deltas = Mock(return_value=(0, []))
+
+ self.secrets = Mock()
+
+ self.hs = self.setup_test_homeserver()
+
+ self.hs.config.registration_shared_secret = "shared"
+
+ self.hs.get_media_repository = Mock()
+ self.hs.get_deactivate_account_handler = Mock()
+
+ return self.hs
+
+ def test_disabled(self):
+ """
+ If there is no shared secret, registration through this method will be
+ prevented.
+ """
+ self.hs.config.registration_shared_secret = None
+
+ request, channel = self.make_request("POST", self.url, b"{}")
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual(
+ "Shared secret registration is not enabled", channel.json_body["error"]
+ )
+
+ def test_get_nonce(self):
+ """
+ Calling GET on the endpoint will return a randomised nonce, using the
+ homeserver's secrets provider.
+ """
+ secrets = Mock()
+ secrets.token_hex = Mock(return_value="abcd")
+
+ self.hs.get_secrets = Mock(return_value=secrets)
+
+ request, channel = self.make_request("GET", self.url)
+ self.render(request)
+
+ self.assertEqual(channel.json_body, {"nonce": "abcd"})
+
+ def test_expired_nonce(self):
+ """
+ Calling GET on the endpoint will return a randomised nonce, which will
+ only last for SALT_TIMEOUT (60s).
+ """
+ request, channel = self.make_request("GET", self.url)
+ self.render(request)
+ nonce = channel.json_body["nonce"]
+
+ # 59 seconds
+ self.reactor.advance(59)
+
+ body = json.dumps({"nonce": nonce})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("username must be specified", channel.json_body["error"])
+
+ # 61 seconds
+ self.reactor.advance(2)
+
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("unrecognised nonce", channel.json_body["error"])
+
+ def test_register_incorrect_nonce(self):
+ """
+ Only the provided nonce can be used, as it's checked in the MAC.
+ """
+ request, channel = self.make_request("GET", self.url)
+ self.render(request)
+ nonce = channel.json_body["nonce"]
+
+ want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
+ want_mac.update(b"notthenonce\x00bob\x00abc123\x00admin")
+ want_mac = want_mac.hexdigest()
+
+ body = json.dumps(
+ {
+ "nonce": nonce,
+ "username": "bob",
+ "password": "abc123",
+ "admin": True,
+ "mac": want_mac,
+ }
+ )
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("HMAC incorrect", channel.json_body["error"])
+
+ def test_register_correct_nonce(self):
+ """
+ When the correct nonce is provided, and the right key is provided, the
+ user is registered.
+ """
+ request, channel = self.make_request("GET", self.url)
+ self.render(request)
+ nonce = channel.json_body["nonce"]
+
+ want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
+ want_mac.update(
+ nonce.encode("ascii") + b"\x00bob\x00abc123\x00admin\x00support"
+ )
+ want_mac = want_mac.hexdigest()
+
+ body = json.dumps(
+ {
+ "nonce": nonce,
+ "username": "bob",
+ "password": "abc123",
+ "admin": True,
+ "user_type": UserTypes.SUPPORT,
+ "mac": want_mac,
+ }
+ )
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("@bob:test", channel.json_body["user_id"])
+
+ def test_nonce_reuse(self):
+ """
+ A valid unrecognised nonce.
+ """
+ request, channel = self.make_request("GET", self.url)
+ self.render(request)
+ nonce = channel.json_body["nonce"]
+
+ want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
+ want_mac.update(nonce.encode("ascii") + b"\x00bob\x00abc123\x00admin")
+ want_mac = want_mac.hexdigest()
+
+ body = json.dumps(
+ {
+ "nonce": nonce,
+ "username": "bob",
+ "password": "abc123",
+ "admin": True,
+ "mac": want_mac,
+ }
+ )
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("@bob:test", channel.json_body["user_id"])
+
+ # Now, try and reuse it
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("unrecognised nonce", channel.json_body["error"])
+
+ def test_missing_parts(self):
+ """
+ Synapse will complain if you don't give nonce, username, password, and
+ mac. Admin and user_types are optional. Additional checks are done for length
+ and type.
+ """
+
+ def nonce():
+ request, channel = self.make_request("GET", self.url)
+ self.render(request)
+ return channel.json_body["nonce"]
+
+ #
+ # Nonce check
+ #
+
+ # Must be present
+ body = json.dumps({})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("nonce must be specified", channel.json_body["error"])
+
+ #
+ # Username checks
+ #
+
+ # Must be present
+ body = json.dumps({"nonce": nonce()})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("username must be specified", channel.json_body["error"])
+
+ # Must be a string
+ body = json.dumps({"nonce": nonce(), "username": 1234})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("Invalid username", channel.json_body["error"])
+
+ # Must not have null bytes
+ body = json.dumps({"nonce": nonce(), "username": "abcd\u0000"})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("Invalid username", channel.json_body["error"])
+
+ # Must not have null bytes
+ body = json.dumps({"nonce": nonce(), "username": "a" * 1000})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("Invalid username", channel.json_body["error"])
+
+ #
+ # Password checks
+ #
+
+ # Must be present
+ body = json.dumps({"nonce": nonce(), "username": "a"})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("password must be specified", channel.json_body["error"])
+
+ # Must be a string
+ body = json.dumps({"nonce": nonce(), "username": "a", "password": 1234})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("Invalid password", channel.json_body["error"])
+
+ # Must not have null bytes
+ body = json.dumps({"nonce": nonce(), "username": "a", "password": "abcd\u0000"})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("Invalid password", channel.json_body["error"])
+
+ # Super long
+ body = json.dumps({"nonce": nonce(), "username": "a", "password": "A" * 1000})
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("Invalid password", channel.json_body["error"])
+
+ #
+ # user_type check
+ #
+
+ # Invalid user_type
+ body = json.dumps(
+ {
+ "nonce": nonce(),
+ "username": "a",
+ "password": "1234",
+ "user_type": "invalid",
+ }
+ )
+ request, channel = self.make_request("POST", self.url, body.encode("utf8"))
+ self.render(request)
+
+ self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("Invalid user type", channel.json_body["error"])
+
+
+class UsersListTestCase(unittest.HomeserverTestCase):
+
+ servlets = [
+ synapse.rest.admin.register_servlets,
+ login.register_servlets,
+ ]
+ url = "/_synapse/admin/v2/users"
+
+ def prepare(self, reactor, clock, hs):
+ self.admin_user = self.register_user("admin", "pass", admin=True)
+ self.admin_user_tok = self.login("admin", "pass")
+
+ self.register_user("user1", "pass1", admin=False)
+ self.register_user("user2", "pass2", admin=False)
+
+ def test_no_auth(self):
+ """
+ Try to list users without authentication.
+ """
+ request, channel = self.make_request("GET", self.url, b"{}")
+ self.render(request)
+
+ self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("M_MISSING_TOKEN", channel.json_body["errcode"])
+
+ def test_all_users(self):
+ """
+ List all users, including deactivated users.
+ """
+ request, channel = self.make_request(
+ "GET",
+ self.url + "?deactivated=true",
+ b"{}",
+ access_token=self.admin_user_tok,
+ )
+ self.render(request)
+
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual(3, len(channel.json_body["users"]))
+
+
+class UserRestTestCase(unittest.HomeserverTestCase):
+
+ servlets = [
+ synapse.rest.admin.register_servlets,
+ login.register_servlets,
+ ]
+
+ def prepare(self, reactor, clock, hs):
+ self.store = hs.get_datastore()
+
+ self.url = "/_synapse/admin/v2/users/@bob:test"
+
+ self.admin_user = self.register_user("admin", "pass", admin=True)
+ self.admin_user_tok = self.login("admin", "pass")
+
+ self.other_user = self.register_user("user", "pass")
+ self.other_user_token = self.login("user", "pass")
+
+ def test_requester_is_no_admin(self):
+ """
+ If the user is not a server admin, an error is returned.
+ """
+ self.hs.config.registration_shared_secret = None
+
+ request, channel = self.make_request(
+ "GET", self.url, access_token=self.other_user_token,
+ )
+ self.render(request)
+
+ self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("You are not a server admin", channel.json_body["error"])
+
+ request, channel = self.make_request(
+ "PUT", self.url, access_token=self.other_user_token, content=b"{}",
+ )
+ self.render(request)
+
+ self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("You are not a server admin", channel.json_body["error"])
+
+ def test_requester_is_admin(self):
+ """
+ If the user is a server admin, a new user is created.
+ """
+ self.hs.config.registration_shared_secret = None
+
+ body = json.dumps({"password": "abc123", "admin": True})
+
+ # Create user
+ request, channel = self.make_request(
+ "PUT",
+ self.url,
+ access_token=self.admin_user_tok,
+ content=body.encode(encoding="utf_8"),
+ )
+ self.render(request)
+
+ self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("@bob:test", channel.json_body["name"])
+ self.assertEqual("bob", channel.json_body["displayname"])
+
+ # Get user
+ request, channel = self.make_request(
+ "GET", self.url, access_token=self.admin_user_tok,
+ )
+ self.render(request)
+
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("@bob:test", channel.json_body["name"])
+ self.assertEqual("bob", channel.json_body["displayname"])
+ self.assertEqual(1, channel.json_body["admin"])
+ self.assertEqual(0, channel.json_body["is_guest"])
+ self.assertEqual(0, channel.json_body["deactivated"])
+
+ # Modify user
+ body = json.dumps({"displayname": "foobar", "deactivated": True})
+
+ request, channel = self.make_request(
+ "PUT",
+ self.url,
+ access_token=self.admin_user_tok,
+ content=body.encode(encoding="utf_8"),
+ )
+ self.render(request)
+
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("@bob:test", channel.json_body["name"])
+ self.assertEqual("foobar", channel.json_body["displayname"])
+ self.assertEqual(True, channel.json_body["deactivated"])
+
+ # Get user
+ request, channel = self.make_request(
+ "GET", self.url, access_token=self.admin_user_tok,
+ )
+ self.render(request)
+
+ self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+ self.assertEqual("@bob:test", channel.json_body["name"])
+ self.assertEqual("foobar", channel.json_body["displayname"])
+ self.assertEqual(1, channel.json_body["admin"])
+ self.assertEqual(0, channel.json_body["is_guest"])
+ self.assertEqual(1, channel.json_body["deactivated"])
diff --git a/tests/storage/test_registration.py b/tests/storage/test_registration.py
index ed5786865a..71a40a0a49 100644
--- a/tests/storage/test_registration.py
+++ b/tests/storage/test_registration.py
@@ -43,12 +43,14 @@ class RegistrationStoreTestCase(unittest.TestCase):
# TODO(paul): Surely this field should be 'user_id', not 'name'
"name": self.user_id,
"password_hash": self.pwhash,
+ "admin": 0,
"is_guest": 0,
"consent_version": None,
"consent_server_notice_sent": None,
"appservice_id": None,
"creation_ts": 1000,
"user_type": None,
+ "deactivated": 0,
},
(yield self.store.get_user_by_id(self.user_id)),
)
|