| author | Patrick Cloke <patrickc@matrix.org> | 2022-10-04 12:16:51 -0400 |
|---|---|---|
| committer | Patrick Cloke <patrickc@matrix.org> | 2022-10-04 12:16:51 -0400 |
| commit | 3bba1a80c40a68af68df52826356b581a22e7d3d (patch) | |
| tree | da4820b2ab158221dece5d89553892f044f49528 | |
| parent | Disable things which don't support workers. (diff) | |
| parent | Use threaded receipts when fetching events for push. (#13878) (diff) | |
| download | synapse-3bba1a80c40a68af68df52826356b581a22e7d3d.tar.xz | |
Merge remote-tracking branch 'origin/develop' into clokep/rel-workers
114 files changed, 941 insertions, 287 deletions
diff --git a/CHANGES.md b/CHANGES.md
index fbb57f0e04..60961bf38d 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -1,3 +1,108 @@
+Synapse 1.69.0rc1 (2022-10-04)
+==============================
+
+
+Please note that legacy Prometheus metric names are now deprecated and will be removed in Synapse 1.73.0.
+Server administrators should update their dashboards and alerting rules to avoid using the deprecated metric names.
+See the [upgrade notes](https://matrix-org.github.io/synapse/v1.69/upgrade.html#upgrading-to-v1690) for more details.
+
+
+Features
+--------
+
+- Allow application services to set the `origin_server_ts` of a state event by providing the query parameter `ts` in [`PUT /_matrix/client/r0/rooms/{roomId}/state/{eventType}/{stateKey}`](https://spec.matrix.org/v1.4/client-server-api/#put_matrixclientv3roomsroomidstateeventtypestatekey), per [MSC3316](https://github.com/matrix-org/matrix-doc/pull/3316). Contributed by @lukasdenk. ([\#11866](https://github.com/matrix-org/synapse/issues/11866))
+- Allow server admins to require a manual approval process before new accounts can be used (using [MSC3866](https://github.com/matrix-org/matrix-spec-proposals/pull/3866)). ([\#13556](https://github.com/matrix-org/synapse/issues/13556))
+- Exponentially back off from backfilling the same event over and over. ([\#13635](https://github.com/matrix-org/synapse/issues/13635), [\#13936](https://github.com/matrix-org/synapse/issues/13936))
+- Add cache invalidation across workers to the module API. ([\#13667](https://github.com/matrix-org/synapse/issues/13667), [\#13947](https://github.com/matrix-org/synapse/issues/13947))
+- Experimental implementation of [MSC3882](https://github.com/matrix-org/matrix-spec-proposals/pull/3882) to allow an existing device/session to generate a login token for use on a new device/session. ([\#13722](https://github.com/matrix-org/synapse/issues/13722), [\#13868](https://github.com/matrix-org/synapse/issues/13868))
+- Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)). ([\#13782](https://github.com/matrix-org/synapse/issues/13782), [\#13893](https://github.com/matrix-org/synapse/issues/13893), [\#13932](https://github.com/matrix-org/synapse/issues/13932), [\#13937](https://github.com/matrix-org/synapse/issues/13937), [\#13939](https://github.com/matrix-org/synapse/issues/13939))
+- Add experimental support for [MSC3881: Remotely toggle push notifications for another client](https://github.com/matrix-org/matrix-spec-proposals/pull/3881). ([\#13799](https://github.com/matrix-org/synapse/issues/13799), [\#13831](https://github.com/matrix-org/synapse/issues/13831), [\#13860](https://github.com/matrix-org/synapse/issues/13860))
+- Keep track of when an event pulled over federation fails its signature check so we can intelligently back off in the future. ([\#13815](https://github.com/matrix-org/synapse/issues/13815))
+- Improve validation for the unspecced, internal-only `_matrix/client/unstable/add_threepid/msisdn/submit_token` endpoint. ([\#13832](https://github.com/matrix-org/synapse/issues/13832))
+- Faster remote room joins: record _when_ we first partial-join to a room. ([\#13892](https://github.com/matrix-org/synapse/issues/13892))
+- Support a `dir` parameter on the `/relations` endpoint per [MSC3715](https://github.com/matrix-org/matrix-doc/pull/3715). ([\#13920](https://github.com/matrix-org/synapse/issues/13920))
+- Ask mail servers receiving emails from Synapse not to send automatic replies (e.g. out-of-office responses). ([\#13957](https://github.com/matrix-org/synapse/issues/13957))
+
+
+Bugfixes
+--------
+
+- Send push notifications for invites received over federation. ([\#13719](https://github.com/matrix-org/synapse/issues/13719), [\#14014](https://github.com/matrix-org/synapse/issues/14014))
+- Fix a long-standing bug where typing events would be accepted from remote servers not present in a room. Also fix a bug where incoming typing events would cause other incoming events to get stuck during a fast join. ([\#13830](https://github.com/matrix-org/synapse/issues/13830))
+- Fix a bug introduced in Synapse v1.53.0 where the experimental implementation of [MSC3715](https://github.com/matrix-org/matrix-spec-proposals/pull/3715) would give incorrect results when paginating forward. ([\#13840](https://github.com/matrix-org/synapse/issues/13840))
+- Fix an access token leak to logs from the proxy agent. ([\#13855](https://github.com/matrix-org/synapse/issues/13855))
+- Fix the `have_seen_event` cache not being invalidated after we persist an event, which caused inefficiencies such as extra `/state` federation calls. ([\#13863](https://github.com/matrix-org/synapse/issues/13863))
+- Faster room joins: Fix a bug introduced in 1.66.0 where an error would be logged when syncing after joining a room. ([\#13872](https://github.com/matrix-org/synapse/issues/13872))
+- Fix a bug introduced in 1.66.0 where some required fields in the push rules sent to clients were not present anymore. Contributed by Nico. ([\#13904](https://github.com/matrix-org/synapse/issues/13904))
+- Fix packaging to include `Cargo.lock` in `sdist`. ([\#13909](https://github.com/matrix-org/synapse/issues/13909))
+- Fix a long-standing bug where device updates could cause delays sending out to-device messages over federation. ([\#13922](https://github.com/matrix-org/synapse/issues/13922))
+- Fix a bug introduced in v1.68.0 where Synapse would require `setuptools_rust` at runtime, even though the package is only required at build time. ([\#13952](https://github.com/matrix-org/synapse/issues/13952))
+- Fix a long-standing bug where `POST /_matrix/client/v3/keys/query` requests could result in excessively large SQL queries. ([\#13956](https://github.com/matrix-org/synapse/issues/13956))
+- Fix a performance regression in the `get_users_in_room` database query. Introduced in v1.67.0. ([\#13972](https://github.com/matrix-org/synapse/issues/13972))
+- Fix a bug introduced in v1.68.0 where the Rust extension wasn't built in `release` mode when using `poetry install`. ([\#14009](https://github.com/matrix-org/synapse/issues/14009))
+- Do not return an unspecified `original_event` field when using the stable `/relations` endpoint. Introduced in Synapse v1.57.0. ([\#14025](https://github.com/matrix-org/synapse/issues/14025))
+- Correctly handle a race with device lists when a remote user leaves during a partial join. ([\#13885](https://github.com/matrix-org/synapse/issues/13885))
+- Correctly handle sending local device list updates to remote servers during a partial join. ([\#13934](https://github.com/matrix-org/synapse/issues/13934))
+
+
+Improved Documentation
+----------------------
+
+- Add `worker_main_http_uri` for the worker generator bash script. ([\#13772](https://github.com/matrix-org/synapse/issues/13772))
+- Update the URL for the NixOS module for Synapse. ([\#13818](https://github.com/matrix-org/synapse/issues/13818))
+- Fix a mistake in sso_mapping_providers.md: `map_user_attributes` is expected to return `display_name`, not `displayname`. ([\#13836](https://github.com/matrix-org/synapse/issues/13836))
+- Fix a cross-link from the registration admin API to the `registration_shared_secret` configuration documentation. ([\#13870](https://github.com/matrix-org/synapse/issues/13870))
+- Update the man page for the `hash_password` script to correct the default number of bcrypt rounds performed. ([\#13911](https://github.com/matrix-org/synapse/issues/13911), [\#13930](https://github.com/matrix-org/synapse/issues/13930))
+- Emphasize the right reasons for using `(room_id, event_id)` in a database schema. ([\#13915](https://github.com/matrix-org/synapse/issues/13915))
+- Add an instruction to the contributing guide for running unit tests in parallel. Contributed by @ashfame. ([\#13928](https://github.com/matrix-org/synapse/issues/13928))
+- Clarify that the `auto_join_rooms` config option can also be used with Space aliases. ([\#13931](https://github.com/matrix-org/synapse/issues/13931))
+- Add some cross references to the worker documentation. ([\#13974](https://github.com/matrix-org/synapse/issues/13974))
+- Linkify URLs in the config documentation. ([\#14003](https://github.com/matrix-org/synapse/issues/14003))
+
+
+Deprecations and Removals
+-------------------------
+
+- Remove the `complete_sso_login` method from the Module API, which was deprecated in Synapse 1.13.0. ([\#13843](https://github.com/matrix-org/synapse/issues/13843))
+- Announce that legacy metric names are deprecated; they will be turned off by default in Synapse v1.71.0 and removed altogether in Synapse v1.73.0. See the upgrade notes for more information. ([\#14024](https://github.com/matrix-org/synapse/issues/14024))
+
+
+Internal Changes
+----------------
+
+- Speed up creation of DM rooms. ([\#13487](https://github.com/matrix-org/synapse/issues/13487), [\#13800](https://github.com/matrix-org/synapse/issues/13800))
+- Port push rules to Rust. ([\#13768](https://github.com/matrix-org/synapse/issues/13768), [\#13838](https://github.com/matrix-org/synapse/issues/13838), [\#13889](https://github.com/matrix-org/synapse/issues/13889))
+- Optimise get rooms for user calls. Contributed by Nick @ Beeper (@fizzadar). ([\#13787](https://github.com/matrix-org/synapse/issues/13787))
+- Update the script which makes full schema dumps. ([\#13792](https://github.com/matrix-org/synapse/issues/13792))
+- Use shared methods for cache invalidation when persisting events, removing duplicate codepaths. Contributed by Nick @ Beeper (@fizzadar). ([\#13796](https://github.com/matrix-org/synapse/issues/13796))
+- Improve the `synapse.api.auth.Auth` mock used in unit tests. ([\#13809](https://github.com/matrix-org/synapse/issues/13809))
+- Faster remote room joins: tell remote homeservers that we are unable to authorise them if they query a room which has partial state on our server. ([\#13823](https://github.com/matrix-org/synapse/issues/13823))
+- Carry IdP Session IDs through user-mapping sessions. ([\#13839](https://github.com/matrix-org/synapse/issues/13839))
+- Fix the release script not publishing binary wheels. ([\#13850](https://github.com/matrix-org/synapse/issues/13850))
+- Raise an issue if Complement fails with latest deps. ([\#13859](https://github.com/matrix-org/synapse/issues/13859))
+- Correct the comments in the Complement Dockerfile. ([\#13867](https://github.com/matrix-org/synapse/issues/13867))
+- Create a new snapshot of the database schema. ([\#13873](https://github.com/matrix-org/synapse/issues/13873))
+- Faster room joins: Send device list updates to most servers in rooms with partial state. ([\#13874](https://github.com/matrix-org/synapse/issues/13874), [\#14013](https://github.com/matrix-org/synapse/issues/14013))
+- Add comments to the Prometheus recording rules to make it clear which set of rules you need for Grafana or Prometheus Console. ([\#13876](https://github.com/matrix-org/synapse/issues/13876))
+- Only pull relevant backfill points from the database based on the current depth and limit (instead of all) every time we want to `/backfill`. ([\#13879](https://github.com/matrix-org/synapse/issues/13879))
+- Faster room joins: Avoid waiting for full state when processing `/keys/changes` requests. ([\#13888](https://github.com/matrix-org/synapse/issues/13888))
+- Improve backfill robustness by trying more servers when we get a `4xx` error back. ([\#13890](https://github.com/matrix-org/synapse/issues/13890))
+- Fix mypy errors with canonicaljson 1.6.3. ([\#13905](https://github.com/matrix-org/synapse/issues/13905))
+- Faster remote room joins: correctly handle remote device list updates during a partial join. ([\#13913](https://github.com/matrix-org/synapse/issues/13913))
+- Complement image: propagate SIGTERM to all workers. ([\#13914](https://github.com/matrix-org/synapse/issues/13914))
+- Update an inaccurate comment in Synapse's upsert database helper. ([\#13924](https://github.com/matrix-org/synapse/issues/13924))
+- Update mypy (0.950 -> 0.981) and mypy-zope (0.3.7 -> 0.3.11). ([\#13925](https://github.com/matrix-org/synapse/issues/13925), [\#13993](https://github.com/matrix-org/synapse/issues/13993))
+- Use the dedicated `get_local_users_in_room(room_id)` function to find local users when calculating users to copy over during a room upgrade. ([\#13960](https://github.com/matrix-org/synapse/issues/13960))
+- Refactor language in the user directory's `_track_user_joined_room` code to make it clearer that we use both local and remote users. ([\#13966](https://github.com/matrix-org/synapse/issues/13966))
+- Revert catch-all exceptions being recorded as event pull attempt failures (only handle what we know about). ([\#13969](https://github.com/matrix-org/synapse/issues/13969))
+- Speed up calculating push actions in large rooms. ([\#13973](https://github.com/matrix-org/synapse/issues/13973), [\#13992](https://github.com/matrix-org/synapse/issues/13992))
+- Enable update notifications from GitHub's Dependabot. ([\#13976](https://github.com/matrix-org/synapse/issues/13976))
+- Prototype a workflow to automatically add changelogs to Dependabot PRs. ([\#13998](https://github.com/matrix-org/synapse/issues/13998), [\#14011](https://github.com/matrix-org/synapse/issues/14011), [\#14017](https://github.com/matrix-org/synapse/issues/14017), [\#14021](https://github.com/matrix-org/synapse/issues/14021), [\#14027](https://github.com/matrix-org/synapse/issues/14027))
+- Fix type annotations to be compatible with new annotations in development versions of Twisted. ([\#14012](https://github.com/matrix-org/synapse/issues/14012))
+- Clear out stale entries in the `event_push_actions_staging` table. ([\#14020](https://github.com/matrix-org/synapse/issues/14020))
+- Bump versions of GitHub Actions. ([\#13978](https://github.com/matrix-org/synapse/issues/13978), [\#13979](https://github.com/matrix-org/synapse/issues/13979), [\#13980](https://github.com/matrix-org/synapse/issues/13980), [\#13982](https://github.com/matrix-org/synapse/issues/13982), [\#14015](https://github.com/matrix-org/synapse/issues/14015), [\#14019](https://github.com/matrix-org/synapse/issues/14019), [\#14022](https://github.com/matrix-org/synapse/issues/14022), [\#14023](https://github.com/matrix-org/synapse/issues/14023))
+
+
 Synapse 1.68.0 (2022-09-27)
 ===========================
diff --git a/changelog.d/11866.feature b/changelog.d/11866.feature deleted file mode 100644 index 0b52caf805..0000000000 --- a/changelog.d/11866.feature +++ /dev/null @@ -1 +0,0 @@ -Allow application services to set the `origin_server_ts` of a state event by providing the query parameter `ts` in `PUT /_matrix/client/r0/rooms/{roomId}/state/{eventType}/{stateKey}`, per [MSC3316](https://github.com/matrix-org/matrix-doc/pull/3316). Contributed by @lukasdenk. \ No newline at end of file
diff --git a/changelog.d/13487.misc b/changelog.d/13487.misc deleted file mode 100644 index 761adc8b05..0000000000 --- a/changelog.d/13487.misc +++ /dev/null @@ -1 +0,0 @@ -Speed up creation of DM rooms.
diff --git a/changelog.d/13556.feature b/changelog.d/13556.feature deleted file mode 100644 index f9d63db6c0..0000000000 --- a/changelog.d/13556.feature +++ /dev/null @@ -1 +0,0 @@ -Allow server admins to require a manual approval process before new accounts can be used (using [MSC3866](https://github.com/matrix-org/matrix-spec-proposals/pull/3866)).
diff --git a/changelog.d/13635.feature b/changelog.d/13635.feature deleted file mode 100644 index d86bf7ed80..0000000000 --- a/changelog.d/13635.feature +++ /dev/null @@ -1 +0,0 @@ -Exponentially backoff from backfilling the same event over and over.
diff --git a/changelog.d/13667.feature b/changelog.d/13667.feature deleted file mode 100644 index a0b3cfe18c..0000000000 --- a/changelog.d/13667.feature +++ /dev/null @@ -1 +0,0 @@ -Add cache invalidation across workers to module API.
diff --git a/changelog.d/13719.bugfix b/changelog.d/13719.bugfix deleted file mode 100644 index 4318f4daff..0000000000 --- a/changelog.d/13719.bugfix +++ /dev/null @@ -1 +0,0 @@ -Send invite push notifications for invite over federation.
diff --git a/changelog.d/13722.feature b/changelog.d/13722.feature deleted file mode 100644 index 588d143c0f..0000000000 --- a/changelog.d/13722.feature +++ /dev/null @@ -1 +0,0 @@ -Experimental implementation of MSC3882 to allow an existing device/session to generate a login token for use on a new device/session.
diff --git a/changelog.d/13768.misc b/changelog.d/13768.misc deleted file mode 100644 index 28bddb7059..0000000000 --- a/changelog.d/13768.misc +++ /dev/null @@ -1 +0,0 @@ -Port push rules to using Rust.
diff --git a/changelog.d/13772.doc b/changelog.d/13772.doc deleted file mode 100644 index 3398ff3765..0000000000 --- a/changelog.d/13772.doc +++ /dev/null @@ -1 +0,0 @@ -Add `worker_main_http_uri` for the worker generator bash script.
diff --git a/changelog.d/13787.misc b/changelog.d/13787.misc deleted file mode 100644 index a9b93717f0..0000000000 --- a/changelog.d/13787.misc +++ /dev/null @@ -1 +0,0 @@ -Optimise get rooms for user calls. Contributed by Nick @ Beeper (@fizzadar).
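The `changelog.d/13722.feature` entry above (with [\#13868]) covers the experimental MSC3882 flow: an existing session mints a short-lived login token that a new device redeems via `m.login.token`. A minimal sketch of that flow follows; the unstable endpoint path and the `login_token` field name are assumptions taken from the MSC, and since the feature is experimental in this release the details may differ.

```python
import requests

HOMESERVER = "https://matrix.example.org"       # hypothetical homeserver
EXISTING_TOKEN = "syt_existing_session_token"   # access token of the logged-in device

def request_login_token() -> str:
    # The already-authenticated device asks the server to mint a short-lived token.
    # Endpoint path is the unstable one suggested by MSC3882 (assumed, not confirmed here).
    resp = requests.post(
        f"{HOMESERVER}/_matrix/client/unstable/org.matrix.msc3882/login/token",
        headers={"Authorization": f"Bearer {EXISTING_TOKEN}"},
        json={},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["login_token"]

def login_new_device(login_token: str) -> dict:
    # The new device redeems the token using the standard m.login.token login type.
    resp = requests.post(
        f"{HOMESERVER}/_matrix/client/v3/login",
        json={"type": "m.login.token", "token": login_token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # contains access_token, device_id, user_id

if __name__ == "__main__":
    print(login_new_device(request_login_token())["device_id"])
```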
diff --git a/changelog.d/13792.misc b/changelog.d/13792.misc deleted file mode 100644 index 36ac91400a..0000000000 --- a/changelog.d/13792.misc +++ /dev/null @@ -1 +0,0 @@ -Update the script which makes full schema dumps. diff --git a/changelog.d/13796.misc b/changelog.d/13796.misc deleted file mode 100644 index 9ed1662394..0000000000 --- a/changelog.d/13796.misc +++ /dev/null @@ -1 +0,0 @@ -Use shared methods for cache invalidation when persisting events, remove duplicate codepaths. Contributed by Nick @ Beeper (@fizzadar). diff --git a/changelog.d/13799.feature b/changelog.d/13799.feature deleted file mode 100644 index 6c8e5cffe2..0000000000 --- a/changelog.d/13799.feature +++ /dev/null @@ -1 +0,0 @@ -Add experimental support for [MSC3881: Remotely toggle push notifications for another client](https://github.com/matrix-org/matrix-spec-proposals/pull/3881). diff --git a/changelog.d/13800.misc b/changelog.d/13800.misc deleted file mode 100644 index 761adc8b05..0000000000 --- a/changelog.d/13800.misc +++ /dev/null @@ -1 +0,0 @@ -Speed up creation of DM rooms. diff --git a/changelog.d/13809.misc b/changelog.d/13809.misc deleted file mode 100644 index c2dacca2f2..0000000000 --- a/changelog.d/13809.misc +++ /dev/null @@ -1 +0,0 @@ -Improve the `synapse.api.auth.Auth` mock used in unit tests. diff --git a/changelog.d/13815.feature b/changelog.d/13815.feature deleted file mode 100644 index ba411f5067..0000000000 --- a/changelog.d/13815.feature +++ /dev/null @@ -1 +0,0 @@ -Keep track when an event pulled over federation fails its signature check so we can intelligently back-off in the future. diff --git a/changelog.d/13818.doc b/changelog.d/13818.doc deleted file mode 100644 index 16b31f5071..0000000000 --- a/changelog.d/13818.doc +++ /dev/null @@ -1 +0,0 @@ -Update URL for the NixOS module for Synapse. diff --git a/changelog.d/13823.misc b/changelog.d/13823.misc deleted file mode 100644 index 527d79f4b2..0000000000 --- a/changelog.d/13823.misc +++ /dev/null @@ -1 +0,0 @@ -Faster Remote Room Joins: tell remote homeservers that we are unable to authorise them if they query a room which has partial state on our server. \ No newline at end of file diff --git a/changelog.d/13782.feature b/changelog.d/13824.feature index d0cb902dff..d0cb902dff 100644 --- a/changelog.d/13782.feature +++ b/changelog.d/13824.feature diff --git a/changelog.d/13830.bugfix b/changelog.d/13830.bugfix deleted file mode 100644 index e6215806cd..0000000000 --- a/changelog.d/13830.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a long-standing bug where typing events would be accepted from remote servers not present in a room. Also fix a bug where incoming typing events would cause other incoming events to get stuck during a fast join. diff --git a/changelog.d/13831.feature b/changelog.d/13831.feature deleted file mode 100644 index 6c8e5cffe2..0000000000 --- a/changelog.d/13831.feature +++ /dev/null @@ -1 +0,0 @@ -Add experimental support for [MSC3881: Remotely toggle push notifications for another client](https://github.com/matrix-org/matrix-spec-proposals/pull/3881). diff --git a/changelog.d/13832.feature b/changelog.d/13832.feature deleted file mode 100644 index 1dc1d66efe..0000000000 --- a/changelog.d/13832.feature +++ /dev/null @@ -1 +0,0 @@ -Improve validation for the unspecced, internal-only `_matrix/client/unstable/add_threepid/msisdn/submit_token` endpoint. 
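Several entries above ([\#13799], [\#13831], [\#13860]) add experimental support for MSC3881, which lets a pusher be switched on and off remotely. A rough sketch of what a client-side toggle could look like; the unstable `org.matrix.msc3881.enabled` field name is an assumption from the MSC, not something this changeset confirms.

```python
import requests

HOMESERVER = "https://matrix.example.org"   # hypothetical
ACCESS_TOKEN = "syt_access_token"           # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def set_first_pusher_enabled(enabled: bool) -> None:
    # Fetch the user's pushers, then re-submit one of them with the MSC3881 toggle set.
    pushers = requests.get(
        f"{HOMESERVER}/_matrix/client/v3/pushers", headers=HEADERS, timeout=10
    ).json().get("pushers", [])
    if not pushers:
        raise RuntimeError("no pushers registered for this user")

    pusher = pushers[0]
    # Unstable field name from MSC3881; assumed, not confirmed by this diff.
    pusher["org.matrix.msc3881.enabled"] = enabled
    resp = requests.post(
        f"{HOMESERVER}/_matrix/client/v3/pushers/set",
        headers=HEADERS,
        json=pusher,
        timeout=10,
    )
    resp.raise_for_status()

# set_first_pusher_enabled(False)  # mute pushes for that client without deleting the pusher
```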
diff --git a/changelog.d/13836.doc b/changelog.d/13836.doc deleted file mode 100644 index f2edab00f4..0000000000 --- a/changelog.d/13836.doc +++ /dev/null @@ -1 +0,0 @@ -Fix a mistake in sso_mapping_providers.md: `map_user_attributes` is expected to return `display_name` not `displayname`. diff --git a/changelog.d/13838.misc b/changelog.d/13838.misc deleted file mode 100644 index 28bddb7059..0000000000 --- a/changelog.d/13838.misc +++ /dev/null @@ -1 +0,0 @@ -Port push rules to using Rust. diff --git a/changelog.d/13839.misc b/changelog.d/13839.misc deleted file mode 100644 index 549872c90f..0000000000 --- a/changelog.d/13839.misc +++ /dev/null @@ -1 +0,0 @@ -Carry IdP Session IDs through user-mapping sessions. diff --git a/changelog.d/13840.bugfix b/changelog.d/13840.bugfix deleted file mode 100644 index 0f014439a8..0000000000 --- a/changelog.d/13840.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a bug introduced in Synapse v1.53.0 where the experimental implementation of [MSC3715](https://github.com/matrix-org/matrix-spec-proposals/pull/3715) would give incorrect results when paginating forward. diff --git a/changelog.d/13843.removal b/changelog.d/13843.removal deleted file mode 100644 index f6caaa8895..0000000000 --- a/changelog.d/13843.removal +++ /dev/null @@ -1 +0,0 @@ -Remove the `complete_sso_login` method from the Module API which was deprecated in Synapse 1.13.0. diff --git a/changelog.d/13850.misc b/changelog.d/13850.misc deleted file mode 100644 index a973118aaf..0000000000 --- a/changelog.d/13850.misc +++ /dev/null @@ -1 +0,0 @@ -Fix the release script not publishing binary wheels. \ No newline at end of file diff --git a/changelog.d/13855.bugfix b/changelog.d/13855.bugfix deleted file mode 100644 index 5ea8539bd8..0000000000 --- a/changelog.d/13855.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix access token leak to logs from proxy agent. diff --git a/changelog.d/13859.misc b/changelog.d/13859.misc deleted file mode 100644 index 2780a4af3c..0000000000 --- a/changelog.d/13859.misc +++ /dev/null @@ -1 +0,0 @@ -Raise issue if complement fails with latest deps. diff --git a/changelog.d/13860.feature b/changelog.d/13860.feature deleted file mode 100644 index 6c8e5cffe2..0000000000 --- a/changelog.d/13860.feature +++ /dev/null @@ -1 +0,0 @@ -Add experimental support for [MSC3881: Remotely toggle push notifications for another client](https://github.com/matrix-org/matrix-spec-proposals/pull/3881). diff --git a/changelog.d/13863.bugfix b/changelog.d/13863.bugfix deleted file mode 100644 index 74264a4fab..0000000000 --- a/changelog.d/13863.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix `have_seen_event` cache not being invalidated after we persist an event which causes inefficiency effects like extra `/state` federation calls. diff --git a/changelog.d/13867.misc b/changelog.d/13867.misc deleted file mode 100644 index 1205214598..0000000000 --- a/changelog.d/13867.misc +++ /dev/null @@ -1 +0,0 @@ -Correct the comments in the complement dockerfile. diff --git a/changelog.d/13868.misc b/changelog.d/13868.misc deleted file mode 100644 index d7a99c042a..0000000000 --- a/changelog.d/13868.misc +++ /dev/null @@ -1 +0,0 @@ -Fix unstable MSC3882 endpoint being incorrectly available on stable API versions. \ No newline at end of file diff --git a/changelog.d/13870.doc b/changelog.d/13870.doc deleted file mode 100644 index 2598bc270c..0000000000 --- a/changelog.d/13870.doc +++ /dev/null @@ -1 +0,0 @@ -Fix a cross-link from the register admin API to the `registration_shared_secret` configuration documentation. 
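The `changelog.d/13836.doc` entry above corrects the SSO mapping-provider docs: `map_user_attributes` should return `display_name`, not `displayname`. A minimal sketch of a custom OIDC mapping provider returning the corrected key; the class shape and the `userinfo` type are paraphrased from the documentation rather than copied from Synapse's interface.

```python
from typing import Any, Dict

class ExampleOidcMappingProvider:
    """Illustrative mapping provider returning the corrected `display_name` key."""

    def __init__(self, parsed_config: Any):
        self._config = parsed_config

    async def map_user_attributes(
        self, userinfo: Any, token: Dict[str, str], failures: int
    ) -> Dict[str, Any]:
        localpart = userinfo["preferred_username"]
        if failures:
            # De-duplicate if a previous attempt collided with an existing user.
            localpart += str(failures)
        return {
            "localpart": localpart,
            "display_name": userinfo.get("name"),  # the key the docs now correctly name
            "emails": [userinfo["email"]] if "email" in userinfo else [],
        }
```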
diff --git a/changelog.d/13872.bugfix b/changelog.d/13872.bugfix deleted file mode 100644 index 67d3d9e643..0000000000 --- a/changelog.d/13872.bugfix +++ /dev/null @@ -1 +0,0 @@ -Faster room joins: Fix a bug introduced in 1.66.0 where an error would be logged when syncing after joining a room. diff --git a/changelog.d/13873.misc b/changelog.d/13873.misc deleted file mode 100644 index f4342482f0..0000000000 --- a/changelog.d/13873.misc +++ /dev/null @@ -1 +0,0 @@ -Create a new snapshot of the database schema. diff --git a/changelog.d/13874.misc b/changelog.d/13874.misc deleted file mode 100644 index 499e488c35..0000000000 --- a/changelog.d/13874.misc +++ /dev/null @@ -1 +0,0 @@ -Faster room joins: Send device list updates to most servers in rooms with partial state. diff --git a/changelog.d/13876.misc b/changelog.d/13876.misc deleted file mode 100644 index ef37100115..0000000000 --- a/changelog.d/13876.misc +++ /dev/null @@ -1 +0,0 @@ -Add comments to the Prometheus recording rules to make it clear which set of rules you need for Grafana or Prometheus Console. \ No newline at end of file diff --git a/changelog.d/13893.feature b/changelog.d/13877.feature index d0cb902dff..d0cb902dff 100644 --- a/changelog.d/13893.feature +++ b/changelog.d/13877.feature diff --git a/changelog.d/13932.feature b/changelog.d/13878.feature index d0cb902dff..d0cb902dff 100644 --- a/changelog.d/13932.feature +++ b/changelog.d/13878.feature diff --git a/changelog.d/13879.misc b/changelog.d/13879.misc deleted file mode 100644 index 3cc2a2420f..0000000000 --- a/changelog.d/13879.misc +++ /dev/null @@ -1 +0,0 @@ -Only pull relevant backfill points from the database based on the current depth and limit (instead of all) every time we want to `/backfill`. diff --git a/changelog.d/13885.misc b/changelog.d/13885.misc deleted file mode 100644 index bc76b862df..0000000000 --- a/changelog.d/13885.misc +++ /dev/null @@ -1 +0,0 @@ -Correctly handle a race with device lists when a remote user leaves during a partial join. diff --git a/changelog.d/13888.misc b/changelog.d/13888.misc deleted file mode 100644 index 4ffd9bcede..0000000000 --- a/changelog.d/13888.misc +++ /dev/null @@ -1 +0,0 @@ -Faster room joins: Avoid waiting for full state when processing `/keys/changes` requests. diff --git a/changelog.d/13889.misc b/changelog.d/13889.misc deleted file mode 100644 index 28bddb7059..0000000000 --- a/changelog.d/13889.misc +++ /dev/null @@ -1 +0,0 @@ -Port push rules to using Rust. diff --git a/changelog.d/13890.misc b/changelog.d/13890.misc deleted file mode 100644 index bf76cf7be7..0000000000 --- a/changelog.d/13890.misc +++ /dev/null @@ -1 +0,0 @@ -Improve backfill robustness by trying more servers when we get a `4xx` error back. \ No newline at end of file diff --git a/changelog.d/13892.feature b/changelog.d/13892.feature deleted file mode 100644 index df3f576536..0000000000 --- a/changelog.d/13892.feature +++ /dev/null @@ -1 +0,0 @@ -Faster remote room joins: record _when_ we first partial-join to a room. diff --git a/changelog.d/13904.bugfix b/changelog.d/13904.bugfix deleted file mode 100644 index 397a3108ac..0000000000 --- a/changelog.d/13904.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a bug introduced in 1.66 where some required fields in the pushrules sent to clients were not present anymore. Contributed by Nico. diff --git a/changelog.d/13905.misc b/changelog.d/13905.misc deleted file mode 100644 index efe3bed5f1..0000000000 --- a/changelog.d/13905.misc +++ /dev/null @@ -1 +0,0 @@ -Fix mypy errors with canonicaljson 1.6.3. 
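The `changelog.d/13890.misc` entry above improves backfill robustness by trying more servers when one returns a `4xx`. A toy illustration of that idea (not Synapse's actual implementation): iterate over candidate homeservers and treat a client error from one server as a reason to try the next.

```python
from typing import Callable, Iterable, List, Optional

class HttpResponseException(Exception):
    """Stand-in for an HTTP error raised while talking to a remote homeserver."""

    def __init__(self, code: int):
        super().__init__(f"HTTP {code}")
        self.code = code

def backfill_from_any(
    servers: Iterable[str], fetch: Callable[[str], List[dict]]
) -> List[dict]:
    last_exc: Optional[Exception] = None
    for server in servers:
        try:
            return fetch(server)
        except HttpResponseException as exc:
            if 400 <= exc.code < 500:
                # A 4xx from one server (e.g. it doesn't know the events) is not fatal:
                # remember it and move on to the next candidate, as #13890 describes.
                last_exc = exc
                continue
            raise
    raise last_exc or RuntimeError("no servers available to backfill from")
```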
diff --git a/changelog.d/13909.bugfix b/changelog.d/13909.bugfix deleted file mode 100644 index 883dd72919..0000000000 --- a/changelog.d/13909.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix packaging to include `Cargo.lock` in `sdist`. diff --git a/changelog.d/13911.doc b/changelog.d/13911.doc deleted file mode 100644 index 7cc3206501..0000000000 --- a/changelog.d/13911.doc +++ /dev/null @@ -1 +0,0 @@ -Update the man page for the `hash_password` script to correct the default number of bcrypt rounds performed. \ No newline at end of file diff --git a/changelog.d/13913.misc b/changelog.d/13913.misc deleted file mode 100644 index 30b4401049..0000000000 --- a/changelog.d/13913.misc +++ /dev/null @@ -1 +0,0 @@ -Faster remote room joins: correctly handle remote device list updates during a partial join. diff --git a/changelog.d/13914.misc b/changelog.d/13914.misc deleted file mode 100644 index c29bc25d38..0000000000 --- a/changelog.d/13914.misc +++ /dev/null @@ -1 +0,0 @@ -Complement image: propagate SIGTERM to all workers. diff --git a/changelog.d/13915.doc b/changelog.d/13915.doc deleted file mode 100644 index 828cc30536..0000000000 --- a/changelog.d/13915.doc +++ /dev/null @@ -1 +0,0 @@ -Emphasize the right reasons when to use `(room_id, event_id)` in a database schema. diff --git a/changelog.d/13920.feature b/changelog.d/13920.feature deleted file mode 100644 index aee702bcd2..0000000000 --- a/changelog.d/13920.feature +++ /dev/null @@ -1 +0,0 @@ -Support a `dir` parameter on the `/relations` endpoint per [MSC3715](https://github.com/matrix-org/matrix-doc/pull/3715). diff --git a/changelog.d/13922.bugfix b/changelog.d/13922.bugfix deleted file mode 100644 index 7269d28dee..0000000000 --- a/changelog.d/13922.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix long-standing bug where device updates could cause delays sending out to-device messages over federation. diff --git a/changelog.d/13924.misc b/changelog.d/13924.misc deleted file mode 100644 index 7770b6f03f..0000000000 --- a/changelog.d/13924.misc +++ /dev/null @@ -1 +0,0 @@ -Update an innaccurate comment in Synapse's upsert database helper. diff --git a/changelog.d/13925.misc b/changelog.d/13925.misc deleted file mode 100644 index f490ab122e..0000000000 --- a/changelog.d/13925.misc +++ /dev/null @@ -1 +0,0 @@ -Update mypy (0.950 -> 0.981) and mypy-zope (0.3.7 -> 0.3.11). diff --git a/changelog.d/13928.doc b/changelog.d/13928.doc deleted file mode 100644 index 04cd06f19d..0000000000 --- a/changelog.d/13928.doc +++ /dev/null @@ -1 +0,0 @@ -Add instruction to contributing guide for running unit tests in parallel. Contributed by @ashfame. diff --git a/changelog.d/13930.doc b/changelog.d/13930.doc deleted file mode 100644 index 7cc3206501..0000000000 --- a/changelog.d/13930.doc +++ /dev/null @@ -1 +0,0 @@ -Update the man page for the `hash_password` script to correct the default number of bcrypt rounds performed. \ No newline at end of file diff --git a/changelog.d/13931.doc b/changelog.d/13931.doc deleted file mode 100644 index 85e74fbb3b..0000000000 --- a/changelog.d/13931.doc +++ /dev/null @@ -1 +0,0 @@ -Clarify that the `auto_join_rooms` config option can also be used with Space aliases. \ No newline at end of file diff --git a/changelog.d/13934.misc b/changelog.d/13934.misc deleted file mode 100644 index 6610a9f567..0000000000 --- a/changelog.d/13934.misc +++ /dev/null @@ -1 +0,0 @@ -Correctly handle sending local device list updates to remote servers during a partial join. 
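The `changelog.d/13920.feature` entry above adds a `dir` parameter to `/relations` (MSC3715). A small sketch of paginating relations forwards with `dir=f`; the homeserver URL, token and IDs below are placeholders.

```python
from urllib.parse import quote

import requests

HOMESERVER = "https://matrix.example.org"   # placeholders throughout
ACCESS_TOKEN = "syt_access_token"
ROOM_ID = "!room:example.org"
EVENT_ID = "$thread_root_event"

def fetch_relations_forward(limit: int = 10) -> list:
    # dir=b (backwards) is the default; dir=f pages forwards from the given event.
    resp = requests.get(
        f"{HOMESERVER}/_matrix/client/v1/rooms/{quote(ROOM_ID, safe='')}"
        f"/relations/{quote(EVENT_ID, safe='')}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"dir": "f", "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["chunk"]
```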
diff --git a/changelog.d/13936.feature b/changelog.d/13936.feature deleted file mode 100644 index d86bf7ed80..0000000000 --- a/changelog.d/13936.feature +++ /dev/null @@ -1 +0,0 @@ -Exponentially backoff from backfilling the same event over and over. diff --git a/changelog.d/13937.feature b/changelog.d/13937.feature deleted file mode 100644 index d0cb902dff..0000000000 --- a/changelog.d/13937.feature +++ /dev/null @@ -1 +0,0 @@ -Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)). diff --git a/changelog.d/13939.feature b/changelog.d/13939.feature deleted file mode 100644 index d0cb902dff..0000000000 --- a/changelog.d/13939.feature +++ /dev/null @@ -1 +0,0 @@ -Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)). diff --git a/changelog.d/13947.feature b/changelog.d/13947.feature deleted file mode 100644 index a0b3cfe18c..0000000000 --- a/changelog.d/13947.feature +++ /dev/null @@ -1 +0,0 @@ -Add cache invalidation across workers to module API. diff --git a/changelog.d/13952.bugfix b/changelog.d/13952.bugfix deleted file mode 100644 index a6af20f051..0000000000 --- a/changelog.d/13952.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a bug introduced in v1.68.0 where Synapse would require `setuptools_rust` at runtime, even though the package is only required at build time. diff --git a/changelog.d/13956.bugfix b/changelog.d/13956.bugfix deleted file mode 100644 index 5682c3e002..0000000000 --- a/changelog.d/13956.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a long-standing bug where `POST /_matrix/client/v3/keys/query` requests could result in excessively large SQL queries. diff --git a/changelog.d/13957.feature b/changelog.d/13957.feature deleted file mode 100644 index 4080147357..0000000000 --- a/changelog.d/13957.feature +++ /dev/null @@ -1 +0,0 @@ -Ask mail servers receiving emails from Synapse to not send automatic reply (e.g. out-of-office responses). diff --git a/changelog.d/13960.misc b/changelog.d/13960.misc deleted file mode 100644 index a7ba532bcb..0000000000 --- a/changelog.d/13960.misc +++ /dev/null @@ -1 +0,0 @@ -Use dedicated `get_local_users_in_room(room_id)` function to find local users when calculating users to copy over during a room upgrade. diff --git a/changelog.d/13966.misc b/changelog.d/13966.misc deleted file mode 100644 index b54ad5c776..0000000000 --- a/changelog.d/13966.misc +++ /dev/null @@ -1 +0,0 @@ -Refactor language in user directory `_track_user_joined_room` code to make it more clear that we use both local and remote users. diff --git a/changelog.d/13969.misc b/changelog.d/13969.misc deleted file mode 100644 index 5ede0069c8..0000000000 --- a/changelog.d/13969.misc +++ /dev/null @@ -1 +0,0 @@ -Revert catch-all exceptions being recorded as event pull attempt failures (only handle what we know about). diff --git a/changelog.d/13972.bugfix b/changelog.d/13972.bugfix deleted file mode 100644 index 4c1e19ef8c..0000000000 --- a/changelog.d/13972.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix a performance regression in the `get_users_in_room` database query. Introduced in v1.67.0. diff --git a/changelog.d/13973.misc b/changelog.d/13973.misc deleted file mode 100644 index 58150a2b35..0000000000 --- a/changelog.d/13973.misc +++ /dev/null @@ -1 +0,0 @@ -Speed up calculating push actions in large rooms. 
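The `changelog.d/13937.feature` and `13939.feature` entries above cover MSC3771 thread-specific receipts, and the `receipts.py` hunk later in this diff validates the `thread_id` a client supplies. A sketch of sending a thread-scoped read receipt; identifiers are placeholders, and as an experimental feature it may need to be enabled server-side.

```python
from urllib.parse import quote

import requests

HOMESERVER = "https://matrix.example.org"   # placeholders throughout
ACCESS_TOKEN = "syt_access_token"

def send_threaded_read_receipt(room_id: str, event_id: str, thread_id: str) -> None:
    # thread_id is either "main" (the unthreaded timeline) or the thread root's event ID;
    # the receipts.py hunk below rejects a thread_id the event doesn't actually belong to.
    resp = requests.post(
        f"{HOMESERVER}/_matrix/client/v3/rooms/{quote(room_id, safe='')}"
        f"/receipt/m.read/{quote(event_id, safe='')}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"thread_id": thread_id},
        timeout=10,
    )
    resp.raise_for_status()

# send_threaded_read_receipt("!room:example.org", "$latest_in_thread", "$thread_root")
```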
diff --git a/changelog.d/13974.doc b/changelog.d/13974.doc deleted file mode 100644 index c4ab17db53..0000000000 --- a/changelog.d/13974.doc +++ /dev/null @@ -1 +0,0 @@ -Add some cross references to worker documentation. diff --git a/changelog.d/13976.misc b/changelog.d/13976.misc deleted file mode 100644 index c235d3a4ac..0000000000 --- a/changelog.d/13976.misc +++ /dev/null @@ -1 +0,0 @@ -Enable update notifications from Github's dependabot. diff --git a/changelog.d/13978.misc b/changelog.d/13978.misc deleted file mode 100644 index c958b52078..0000000000 --- a/changelog.d/13978.misc +++ /dev/null @@ -1 +0,0 @@ -Bump docker/login-action from 1 to 2. diff --git a/changelog.d/13979.misc b/changelog.d/13979.misc deleted file mode 100644 index 9012a47828..0000000000 --- a/changelog.d/13979.misc +++ /dev/null @@ -1 +0,0 @@ -Bump actions/download-artifact from 2 to 3. diff --git a/changelog.d/13980.misc b/changelog.d/13980.misc deleted file mode 100644 index ef9fde568a..0000000000 --- a/changelog.d/13980.misc +++ /dev/null @@ -1 +0,0 @@ -Bump actions/cache from 2 to 3. diff --git a/changelog.d/13982.misc b/changelog.d/13982.misc deleted file mode 100644 index 40c73e96ec..0000000000 --- a/changelog.d/13982.misc +++ /dev/null @@ -1 +0,0 @@ -Bump actions/checkout from 2 to 3. diff --git a/changelog.d/13991.misc b/changelog.d/13991.misc new file mode 100644 index 0000000000..f425fb17b2 --- /dev/null +++ b/changelog.d/13991.misc @@ -0,0 +1 @@ +Optimise queries used to get a users rooms during sync. Contributed by Nick @ Beeper (@fizzadar). diff --git a/changelog.d/13992.misc b/changelog.d/13992.misc deleted file mode 100644 index 58150a2b35..0000000000 --- a/changelog.d/13992.misc +++ /dev/null @@ -1 +0,0 @@ -Speed up calculating push actions in large rooms. diff --git a/changelog.d/13993.misc b/changelog.d/13993.misc deleted file mode 100644 index f490ab122e..0000000000 --- a/changelog.d/13993.misc +++ /dev/null @@ -1 +0,0 @@ -Update mypy (0.950 -> 0.981) and mypy-zope (0.3.7 -> 0.3.11). diff --git a/changelog.d/13998.misc b/changelog.d/13998.misc deleted file mode 100644 index 7d793b56e0..0000000000 --- a/changelog.d/13998.misc +++ /dev/null @@ -1 +0,0 @@ -Prototype a workflow to automatically add changelogs to dependabot PRs. diff --git a/changelog.d/14003.doc b/changelog.d/14003.doc deleted file mode 100644 index 81d1be9d43..0000000000 --- a/changelog.d/14003.doc +++ /dev/null @@ -1 +0,0 @@ -Linkify urls in config documentation. diff --git a/changelog.d/14006.misc b/changelog.d/14006.misc new file mode 100644 index 0000000000..c06dcadf02 --- /dev/null +++ b/changelog.d/14006.misc @@ -0,0 +1 @@ +Update authlib from 0.15.5 to 1.1.0. diff --git a/changelog.d/14009.bugfix b/changelog.d/14009.bugfix deleted file mode 100644 index 5f85fee967..0000000000 --- a/changelog.d/14009.bugfix +++ /dev/null @@ -1 +0,0 @@ -Fix bug where Rust extension wasn't built in `release` mode when using `poetry install`. diff --git a/changelog.d/14011.misc b/changelog.d/14011.misc deleted file mode 100644 index 7d793b56e0..0000000000 --- a/changelog.d/14011.misc +++ /dev/null @@ -1 +0,0 @@ -Prototype a workflow to automatically add changelogs to dependabot PRs. diff --git a/changelog.d/14012.misc b/changelog.d/14012.misc deleted file mode 100644 index 9888dc6cc1..0000000000 --- a/changelog.d/14012.misc +++ /dev/null @@ -1 +0,0 @@ -Fix type annotations to be compatible with new annotations in development versions of twisted. 
diff --git a/changelog.d/14013.misc b/changelog.d/14013.misc deleted file mode 100644 index 499e488c35..0000000000 --- a/changelog.d/14013.misc +++ /dev/null @@ -1 +0,0 @@ -Faster room joins: Send device list updates to most servers in rooms with partial state. diff --git a/changelog.d/14014.bugfix b/changelog.d/14014.bugfix deleted file mode 100644 index 4318f4daff..0000000000 --- a/changelog.d/14014.bugfix +++ /dev/null @@ -1 +0,0 @@ -Send invite push notifications for invite over federation. diff --git a/changelog.d/14015.misc b/changelog.d/14015.misc deleted file mode 100644 index 5a602e4592..0000000000 --- a/changelog.d/14015.misc +++ /dev/null @@ -1 +0,0 @@ -Bump docker/setup-buildx-action from 1 to 2. diff --git a/changelog.d/14017.misc b/changelog.d/14017.misc deleted file mode 100644 index 7d793b56e0..0000000000 --- a/changelog.d/14017.misc +++ /dev/null @@ -1 +0,0 @@ -Prototype a workflow to automatically add changelogs to dependabot PRs. diff --git a/changelog.d/14019.misc b/changelog.d/14019.misc deleted file mode 100644 index cc9c7ae181..0000000000 --- a/changelog.d/14019.misc +++ /dev/null @@ -1 +0,0 @@ -Bump docker/setup-qemu-action from 1 to 2. diff --git a/changelog.d/14020.misc b/changelog.d/14020.misc deleted file mode 100644 index 85550b307d..0000000000 --- a/changelog.d/14020.misc +++ /dev/null @@ -1 +0,0 @@ -Clear out stale entries in `event_push_actions_staging` table. diff --git a/changelog.d/14021.misc b/changelog.d/14021.misc deleted file mode 100644 index 7d793b56e0..0000000000 --- a/changelog.d/14021.misc +++ /dev/null @@ -1 +0,0 @@ -Prototype a workflow to automatically add changelogs to dependabot PRs. diff --git a/changelog.d/14022.misc b/changelog.d/14022.misc deleted file mode 100644 index be49a034bb..0000000000 --- a/changelog.d/14022.misc +++ /dev/null @@ -1 +0,0 @@ -Bump docker/build-push-action from 2 to 3. diff --git a/changelog.d/14023.misc b/changelog.d/14023.misc deleted file mode 100644 index b5ce5898c2..0000000000 --- a/changelog.d/14023.misc +++ /dev/null @@ -1 +0,0 @@ -Bump actions/upload-artifact from 2 to 3. diff --git a/changelog.d/14024.removal b/changelog.d/14024.removal deleted file mode 100644 index 9b83cb3927..0000000000 --- a/changelog.d/14024.removal +++ /dev/null @@ -1 +0,0 @@ -Announce that legacy metric names are deprecated, will be turned off by default in Synapse v1.71.0 and removed altogether in Synapse v1.73.0. See the upgrade notes for more information. \ No newline at end of file diff --git a/changelog.d/14025.bugfix b/changelog.d/14025.bugfix deleted file mode 100644 index 391364f44d..0000000000 --- a/changelog.d/14025.bugfix +++ /dev/null @@ -1 +0,0 @@ -Do not return an unspecified `original_event` field when using the stable `/relations` endpoint. Introduced in Synapse v1.57.0. diff --git a/changelog.d/14027.misc b/changelog.d/14027.misc deleted file mode 100644 index 7d793b56e0..0000000000 --- a/changelog.d/14027.misc +++ /dev/null @@ -1 +0,0 @@ -Prototype a workflow to automatically add changelogs to dependabot PRs. diff --git a/changelog.d/14041.misc b/changelog.d/14041.misc new file mode 100644 index 0000000000..a2119627f8 --- /dev/null +++ b/changelog.d/14041.misc @@ -0,0 +1 @@ +Bump types-pyyaml from 6.0.4 to 6.0.12. 
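The `synapse/handlers/sync.py` hunks below replace `get_rooms_for_user_at` with a snapshot-then-reconcile approach: take a stream token, fetch the joined-room list, and then patch the list up from any membership events that raced in between. A stripped-down illustration of that ordering follows; the token and change objects here are stand-ins, not Synapse's real interfaces.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Iterable, Set

@dataclass
class MembershipChange:
    room_id: str
    membership: str        # "join", "leave", "ban", ...
    stream_ordering: int

def joined_rooms_for_sync(
    rooms_snapshot: Set[str],
    token_before_rooms: int,
    membership_changes: Iterable[MembershipChange],
    rooms_to_exclude: Set[str],
) -> FrozenSet[str]:
    joined = set(rooms_snapshot)

    # Only the latest membership change per room matters.
    latest: Dict[str, MembershipChange] = {}
    for change in membership_changes:
        latest[change.room_id] = change

    for room_id, change in latest.items():
        if change.stream_ordering < token_before_rooms:
            # Already reflected in the room-list snapshot taken after this token.
            continue
        if change.membership == "join":
            # The real code re-checks current room membership here so a ban racing
            # with the join is respected; this toy version just adds the room.
            joined.add(room_id)
        else:
            joined.discard(room_id)

    return frozenset(r for r in joined if r not in rooms_to_exclude)
```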
diff --git a/debian/changelog b/debian/changelog index 01fa49aa05..0f4dd28081 100644 --- a/debian/changelog +++ b/debian/changelog @@ -1,9 +1,10 @@ -matrix-synapse-py3 (1.69.0~rc1+nmu1) UNRELEASED; urgency=medium +matrix-synapse-py3 (1.69.0~rc1) stable; urgency=medium * The man page for the hash_password script has been updated to reflect the correct default value of 'bcrypt_rounds'. + * New Synapse release 1.69.0rc1. - -- Synapse Packaging team <packages@matrix.org> Mon, 26 Sep 2022 18:05:09 +0100 + -- Synapse Packaging team <packages@matrix.org> Tue, 04 Oct 2022 11:17:16 +0100 matrix-synapse-py3 (1.68.0) stable; urgency=medium diff --git a/poetry.lock b/poetry.lock index 1b21112644..f43fedd4a7 100644 --- a/poetry.lock +++ b/poetry.lock @@ -13,18 +13,15 @@ tests = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy", "pympl tests_no_zope = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "six"] [[package]] -name = "authlib" -version = "0.15.5" -description = "The ultimate Python library in building OAuth and OpenID Connect servers." +name = "Authlib" +version = "1.1.0" +description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients." category = "main" optional = true python-versions = "*" [package.dependencies] -cryptography = "*" - -[package.extras] -client = ["requests"] +cryptography = ">=3.2" [[package]] name = "automat" @@ -1463,8 +1460,8 @@ python-versions = "*" types-cryptography = "*" [[package]] -name = "types-pyyaml" -version = "6.0.4" +name = "types-PyYAML" +version = "6.0.12" description = "Typing stubs for PyYAML" category = "dev" optional = false @@ -1643,9 +1640,9 @@ attrs = [ {file = "attrs-21.4.0-py2.py3-none-any.whl", hash = "sha256:2d27e3784d7a565d36ab851fe94887c5eccd6a463168875832a1be79c82828b4"}, {file = "attrs-21.4.0.tar.gz", hash = "sha256:626ba8234211db98e869df76230a137c4c40a12d72445c45d5f5b716f076e2fd"}, ] -authlib = [ - {file = "Authlib-0.15.5-py2.py3-none-any.whl", hash = "sha256:ecf4a7a9f2508c0bb07e93a752dd3c495cfaffc20e864ef0ffc95e3f40d2abaf"}, - {file = "Authlib-0.15.5.tar.gz", hash = "sha256:b83cf6360c8e92b0e9df0d1f32d675790bcc4e3c03977499b1eed24dcdef4252"}, +Authlib = [ + {file = "Authlib-1.1.0-py2.py3-none-any.whl", hash = "sha256:be4b6a1dea51122336c210a6945b27a105b9ac572baffd15b07bcff4376c1523"}, + {file = "Authlib-1.1.0.tar.gz", hash = "sha256:0a270c91409fc2b7b0fbee6996e09f2ee3187358762111a9a4225c874b94e891"}, ] automat = [ {file = "Automat-20.2.0-py2.py3-none-any.whl", hash = "sha256:b6feb6455337df834f6c9962d6ccf771515b7d939bca142b29c20c2376bc6111"}, @@ -2755,9 +2752,9 @@ types-pyOpenSSL = [ {file = "types-pyOpenSSL-22.0.10.tar.gz", hash = "sha256:f943b834f5b97e5e808764c2f6e37be1a2e226c46792296f61558196acfcc3a1"}, {file = "types_pyOpenSSL-22.0.10-py3-none-any.whl", hash = "sha256:63baea211768bea580a769ac5c0d637ae8cd3150314aadc5726ca22e4c4f241a"}, ] -types-pyyaml = [ - {file = "types-PyYAML-6.0.4.tar.gz", hash = "sha256:6252f62d785e730e454dfa0c9f0fb99d8dae254c5c3c686903cf878ea27c04b7"}, - {file = "types_PyYAML-6.0.4-py3-none-any.whl", hash = "sha256:693b01c713464a6851f36ff41077f8adbc6e355eda929addfb4a97208aea9b4b"}, +types-PyYAML = [ + {file = "types-PyYAML-6.0.12.tar.gz", hash = "sha256:f6f350418125872f3f0409d96a62a5a5ceb45231af5cc07ee0034ec48a3c82fa"}, + {file = "types_PyYAML-6.0.12-py3-none-any.whl", hash = "sha256:29228db9f82df4f1b7febee06bbfb601677882e98a3da98132e31c6874163e15"}, ] types-requests = [ {file = 
"types-requests-2.28.11.tar.gz", hash = "sha256:7ee827eb8ce611b02b5117cfec5da6455365b6a575f5e3ff19f655ba603e6b4e"}, diff --git a/pyproject.toml b/pyproject.toml index c1d74b0d5f..121f25f3a2 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -57,7 +57,7 @@ manifest-path = "rust/Cargo.toml" [tool.poetry] name = "matrix-synapse" -version = "1.68.0" +version = "1.69.0rc1" description = "Homeserver for the Matrix decentralised comms protocol" authors = ["Matrix.org Team and Contributors <packages@matrix.org>"] license = "Apache-2.0" diff --git a/synapse/handlers/sync.py b/synapse/handlers/sync.py index 329e89c604..0f684857ca 100644 --- a/synapse/handlers/sync.py +++ b/synapse/handlers/sync.py @@ -1317,6 +1317,19 @@ class SyncHandler: At the end, we transfer data from the `sync_result_builder` to a new `SyncResult` instance to signify that the sync calculation is complete. """ + + user_id = sync_config.user.to_string() + app_service = self.store.get_app_service_by_user_id(user_id) + if app_service: + # We no longer support AS users using /sync directly. + # See https://github.com/matrix-org/matrix-doc/issues/1144 + raise NotImplementedError() + + # Note: we get the users room list *before* we get the current token, this + # avoids checking back in history if rooms are joined after the token is fetched. + token_before_rooms = self.event_sources.get_current_token() + mutable_joined_room_ids = set(await self.store.get_rooms_for_user(user_id)) + # NB: The now_token gets changed by some of the generate_sync_* methods, # this is due to some of the underlying streams not supporting the ability # to query up to a given point. @@ -1324,6 +1337,57 @@ class SyncHandler: now_token = self.event_sources.get_current_token() log_kv({"now_token": now_token}) + # Since we fetched the users room list before the token, there's a small window + # during which membership events may have been persisted, so we fetch these now + # and modify the joined room list for any changes between the get_rooms_for_user + # call and the get_current_token call. + membership_change_events = [] + if since_token: + membership_change_events = await self.store.get_membership_changes_for_user( + user_id, since_token.room_key, now_token.room_key, self.rooms_to_exclude + ) + + mem_last_change_by_room_id: Dict[str, EventBase] = {} + for event in membership_change_events: + mem_last_change_by_room_id[event.room_id] = event + + # For the latest membership event in each room found, add/remove the room ID + # from the joined room list accordingly. In this case we only care if the + # latest change is JOIN. + + for room_id, event in mem_last_change_by_room_id.items(): + assert event.internal_metadata.stream_ordering + if ( + event.internal_metadata.stream_ordering + < token_before_rooms.room_key.stream + ): + continue + + logger.info( + "User membership change between getting rooms and current token: %s %s %s", + user_id, + event.membership, + room_id, + ) + # User joined a room - we have to then check the room state to ensure we + # respect any bans if there's a race between the join and ban events. 
+ if event.membership == Membership.JOIN: + user_ids_in_room = await self.store.get_users_in_room(room_id) + if user_id in user_ids_in_room: + mutable_joined_room_ids.add(room_id) + # The user left the room, or left and was re-invited but not joined yet + else: + mutable_joined_room_ids.discard(room_id) + + # Now we have our list of joined room IDs, exclude as configured and freeze + joined_room_ids = frozenset( + ( + room_id + for room_id in mutable_joined_room_ids + if room_id not in self.rooms_to_exclude + ) + ) + logger.debug( "Calculating sync response for %r between %s and %s", sync_config.user, @@ -1331,22 +1395,13 @@ class SyncHandler: now_token, ) - user_id = sync_config.user.to_string() - app_service = self.store.get_app_service_by_user_id(user_id) - if app_service: - # We no longer support AS users using /sync directly. - # See https://github.com/matrix-org/matrix-doc/issues/1144 - raise NotImplementedError() - else: - joined_room_ids = await self.get_rooms_for_user_at( - user_id, now_token.room_key - ) sync_result_builder = SyncResultBuilder( sync_config, full_state, since_token=since_token, now_token=now_token, joined_room_ids=joined_room_ids, + membership_change_events=membership_change_events, ) logger.debug("Fetching account data") @@ -1827,19 +1882,12 @@ class SyncHandler: Does not modify the `sync_result_builder`. """ - user_id = sync_result_builder.sync_config.user.to_string() since_token = sync_result_builder.since_token - now_token = sync_result_builder.now_token + membership_change_events = sync_result_builder.membership_change_events assert since_token - # Get a list of membership change events that have happened to the user - # requesting the sync. - membership_changes = await self.store.get_membership_changes_for_user( - user_id, since_token.room_key, now_token.room_key - ) - - if membership_changes: + if membership_change_events: return True stream_id = since_token.room_key.stream @@ -1878,16 +1926,10 @@ class SyncHandler: since_token = sync_result_builder.since_token now_token = sync_result_builder.now_token sync_config = sync_result_builder.sync_config + membership_change_events = sync_result_builder.membership_change_events assert since_token - # TODO: we've already called this function and ran this query in - # _have_rooms_changed. We could keep the results in memory to avoid a - # second query, at the cost of more complicated source code. - membership_change_events = await self.store.get_membership_changes_for_user( - user_id, since_token.room_key, now_token.room_key, self.rooms_to_exclude - ) - mem_change_events_by_room_id: Dict[str, List[EventBase]] = {} for event in membership_change_events: mem_change_events_by_room_id.setdefault(event.room_id, []).append(event) @@ -2415,60 +2457,6 @@ class SyncHandler: else: raise Exception("Unrecognized rtype: %r", room_builder.rtype) - async def get_rooms_for_user_at( - self, - user_id: str, - room_key: RoomStreamToken, - ) -> FrozenSet[str]: - """Get set of joined rooms for a user at the given stream ordering. - - The stream ordering *must* be recent, otherwise this may throw an - exception if older than a month. (This function is called with the - current token, which should be perfectly fine). - - Args: - user_id - stream_ordering - - ReturnValue: - Set of room_ids the user is in at given stream_ordering. 
- """ - joined_rooms = await self.store.get_rooms_for_user_with_stream_ordering(user_id) - - joined_room_ids = set() - - # We need to check that the stream ordering of the join for each room - # is before the stream_ordering asked for. This might not be the case - # if the user joins a room between us getting the current token and - # calling `get_rooms_for_user_with_stream_ordering`. - # If the membership's stream ordering is after the given stream - # ordering, we need to go and work out if the user was in the room - # before. - # We also need to check whether the room should be excluded from sync - # responses as per the homeserver config. - for joined_room in joined_rooms: - if joined_room.room_id in self.rooms_to_exclude: - continue - - if not joined_room.event_pos.persisted_after(room_key): - joined_room_ids.add(joined_room.room_id) - continue - - logger.info("User joined room after current token: %s", joined_room.room_id) - - extrems = ( - await self.store.get_forward_extremities_for_room_at_stream_ordering( - joined_room.room_id, joined_room.event_pos.stream - ) - ) - user_ids_in_room = await self.state.get_current_user_ids_in_room( - joined_room.room_id, extrems - ) - if user_id in user_ids_in_room: - joined_room_ids.add(joined_room.room_id) - - return frozenset(joined_room_ids) - def _action_has_highlight(actions: List[JsonDict]) -> bool: for action in actions: @@ -2565,6 +2553,7 @@ class SyncResultBuilder: since_token: Optional[StreamToken] now_token: StreamToken joined_room_ids: FrozenSet[str] + membership_change_events: List[EventBase] presence: List[UserPresenceState] = attr.Factory(list) account_data: List[JsonDict] = attr.Factory(list) diff --git a/synapse/push/bulk_push_rule_evaluator.py b/synapse/push/bulk_push_rule_evaluator.py index 61d952742d..f8c4dd74f0 100644 --- a/synapse/push/bulk_push_rule_evaluator.py +++ b/synapse/push/bulk_push_rule_evaluator.py @@ -286,8 +286,13 @@ class BulkPushRuleEvaluator: relation.parent_id, itertools.chain(*(r.rules() for r in rules_by_user.values())), ) + # Recursively attempt to find the thread this event relates to. if relation.rel_type == RelationTypes.THREAD: thread_id = relation.parent_id + else: + # Since the event has not yet been persisted we check whether + # the parent is part of a thread. 
+ thread_id = await self.store.get_thread_id(relation.parent_id) or "main" evaluator = PushRuleEvaluator( _flatten_dict(event), diff --git a/synapse/rest/client/receipts.py b/synapse/rest/client/receipts.py index f3ff156abe..287dfdd69e 100644 --- a/synapse/rest/client/receipts.py +++ b/synapse/rest/client/receipts.py @@ -16,7 +16,7 @@ import logging from typing import TYPE_CHECKING, Tuple from synapse.api.constants import ReceiptTypes -from synapse.api.errors import SynapseError +from synapse.api.errors import Codes, SynapseError from synapse.http.server import HttpServer from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.http.site import SynapseRequest @@ -43,6 +43,7 @@ class ReceiptRestServlet(RestServlet): self.receipts_handler = hs.get_receipts_handler() self.read_marker_handler = hs.get_read_marker_handler() self.presence_handler = hs.get_presence_handler() + self._main_store = hs.get_datastores().main self._known_receipt_types = { ReceiptTypes.READ, @@ -71,7 +72,24 @@ class ReceiptRestServlet(RestServlet): thread_id = body.get("thread_id") if not thread_id or not isinstance(thread_id, str): raise SynapseError( - 400, "thread_id field must be a non-empty string" + 400, + "thread_id field must be a non-empty string", + Codes.INVALID_PARAM, + ) + + if receipt_type == ReceiptTypes.FULLY_READ: + raise SynapseError( + 400, + f"thread_id is not compatible with {ReceiptTypes.FULLY_READ} receipts.", + Codes.INVALID_PARAM, + ) + + # Ensure the event ID roughly correlates to the thread ID. + if thread_id != await self._main_store.get_thread_id(event_id): + raise SynapseError( + 400, + f"event_id {event_id} is not related to thread {thread_id}", + Codes.INVALID_PARAM, ) await self.presence_handler.bump_presence_active_time(requester.user) diff --git a/synapse/storage/databases/main/event_push_actions.py b/synapse/storage/databases/main/event_push_actions.py index 3210e9cca1..332e13d1c9 100644 --- a/synapse/storage/databases/main/event_push_actions.py +++ b/synapse/storage/databases/main/event_push_actions.py @@ -119,6 +119,32 @@ DEFAULT_HIGHLIGHT_ACTION: List[Union[dict, str]] = [ ] +@attr.s(slots=True, auto_attribs=True) +class _RoomReceipt: + """ + HttpPushAction instances include the information used to generate HTTP + requests to a push gateway. + """ + + unthreaded_stream_ordering: int = 0 + # threaded_stream_ordering includes the main pseudo-thread. + threaded_stream_ordering: Dict[str, int] = attr.Factory(dict) + + def is_unread(self, thread_id: str, stream_ordering: int) -> bool: + """Returns True if the stream ordering is unread according to the receipt information.""" + + # Only include push actions with a stream ordering after both the unthreaded + # and threaded receipt. Properly handles a user without any receipts present. + return ( + self.unthreaded_stream_ordering < stream_ordering + and self.threaded_stream_ordering.get(thread_id, 0) < stream_ordering + ) + + +# A _RoomReceipt with no receipts in it. +MISSING_ROOM_RECEIPT = _RoomReceipt() + + @attr.s(slots=True, frozen=True, auto_attribs=True) class HttpPushAction: """ @@ -421,7 +447,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas txn: LoggingTransaction, room_id: str, user_id: str, - receipt_stream_ordering: int, + unthreaded_receipt_stream_ordering: int, ) -> RoomNotifCounts: """Get the number of unread messages for a user/room that have happened since the given stream ordering. 
@@ -430,9 +456,9 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas txn: The database transaction. room_id: The room ID to get unread counts for. user_id: The user ID to get unread counts for. - receipt_stream_ordering: The stream ordering of the user's latest - receipt in the room. If there are no receipts, the stream ordering - of the user's join event. + unthreaded_receipt_stream_ordering: The stream ordering of the user's latest + unthreaded receipt in the room. If there are no unthreaded receipts, + the stream ordering of the user's join event. Returns: A RoomNotifCounts object containing the notification count, the @@ -448,71 +474,181 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas return main_counts return thread_counts.setdefault(thread_id, NotifCounts()) + receipt_types_clause, receipts_args = make_in_list_sql_clause( + self.database_engine, + "receipt_type", + (ReceiptTypes.READ, ReceiptTypes.READ_PRIVATE), + ) + # First we pull the counts from the summary table. # - # We check that `last_receipt_stream_ordering` matches the stream - # ordering given. If it doesn't match then a new read receipt has arrived and - # we haven't yet updated the counts in `event_push_summary` to reflect - # that; in that case we simply ignore `event_push_summary` counts - # and do a manual count of all of the rows in the `event_push_actions` table - # for this user/room. + # We check that `last_receipt_stream_ordering` matches the stream ordering of the + # latest receipt for the thread (which may be either the unthreaded read receipt + # or the threaded read receipt). + # + # If it doesn't match then a new read receipt has arrived and we haven't yet + # updated the counts in `event_push_summary` to reflect that; in that case we + # simply ignore `event_push_summary` counts. + # + # We then do a manual count of all the rows in the `event_push_actions` table + # for any user/room/thread which did not have a valid summary found. # - # If `last_receipt_stream_ordering` is null then that means it's up to - # date (as the row was written by an older version of Synapse that + # If `last_receipt_stream_ordering` is null then that means it's up-to-date + # (as the row was written by an older version of Synapse that # updated `event_push_summary` synchronously when persisting a new read # receipt). txn.execute( - """ - SELECT stream_ordering, notif_count, COALESCE(unread_count, 0), thread_id + f""" + SELECT notif_count, COALESCE(unread_count, 0), thread_id FROM event_push_summary + LEFT JOIN ( + SELECT thread_id, MAX(stream_ordering) AS threaded_receipt_stream_ordering + FROM receipts_linearized + LEFT JOIN events USING (room_id, event_id) + WHERE + user_id = ? + AND room_id = ? + AND stream_ordering > ? + AND {receipt_types_clause} + GROUP BY thread_id + ) AS receipts USING (thread_id) WHERE room_id = ? AND user_id = ? AND ( - (last_receipt_stream_ordering IS NULL AND stream_ordering > ?) - OR last_receipt_stream_ordering = ? + (last_receipt_stream_ordering IS NULL AND stream_ordering > COALESCE(threaded_receipt_stream_ordering, ?)) + OR last_receipt_stream_ordering = COALESCE(threaded_receipt_stream_ordering, ?) 
) AND (notif_count != 0 OR COALESCE(unread_count, 0) != 0) """, - (room_id, user_id, receipt_stream_ordering, receipt_stream_ordering), + ( + user_id, + room_id, + unthreaded_receipt_stream_ordering, + *receipts_args, + room_id, + user_id, + unthreaded_receipt_stream_ordering, + unthreaded_receipt_stream_ordering, + ), ) - max_summary_stream_ordering = 0 - for summary_stream_ordering, notif_count, unread_count, thread_id in txn: + summarised_threads = set() + for notif_count, unread_count, thread_id in txn: + summarised_threads.add(thread_id) counts = _get_thread(thread_id) counts.notify_count += notif_count counts.unread_count += unread_count - # Summaries will only be used if they have not been invalidated by - # a recent receipt; track the latest stream ordering or a valid summary. - # - # Note that since there's only one read receipt in the room per user, - # valid summaries are contiguous. - max_summary_stream_ordering = max( - summary_stream_ordering, max_summary_stream_ordering - ) - # Next we need to count highlights, which aren't summarised - sql = """ + sql = f""" SELECT COUNT(*), thread_id FROM event_push_actions + LEFT JOIN ( + SELECT thread_id, MAX(stream_ordering) AS threaded_receipt_stream_ordering + FROM receipts_linearized + LEFT JOIN events USING (room_id, event_id) + WHERE + user_id = ? + AND room_id = ? + AND stream_ordering > ? + AND {receipt_types_clause} + GROUP BY thread_id + ) AS receipts USING (thread_id) WHERE user_id = ? AND room_id = ? - AND stream_ordering > ? + AND stream_ordering > COALESCE(threaded_receipt_stream_ordering, ?) AND highlight = 1 GROUP BY thread_id """ - txn.execute(sql, (user_id, room_id, receipt_stream_ordering)) + txn.execute( + sql, + ( + user_id, + room_id, + unthreaded_receipt_stream_ordering, + *receipts_args, + user_id, + room_id, + unthreaded_receipt_stream_ordering, + ), + ) for highlight_count, thread_id in txn: _get_thread(thread_id).highlight_count += highlight_count + # For threads which were summarised we need to count actions since the last + # rotation. + thread_id_clause, thread_id_args = make_in_list_sql_clause( + self.database_engine, "thread_id", summarised_threads + ) + + # The (inclusive) event stream ordering that was previously summarised. + rotated_upto_stream_ordering = self.db_pool.simple_select_one_onecol_txn( + txn, + table="event_push_summary_stream_ordering", + keyvalues={}, + retcol="stream_ordering", + ) + + unread_counts = self._get_notif_unread_count_for_user_room( + txn, room_id, user_id, rotated_upto_stream_ordering + ) + for notif_count, unread_count, thread_id in unread_counts: + if thread_id not in summarised_threads: + continue + + if thread_id == MAIN_TIMELINE: + counts.notify_count += notif_count + counts.unread_count += unread_count + elif thread_id in thread_counts: + thread_counts[thread_id].notify_count += notif_count + thread_counts[thread_id].unread_count += unread_count + else: + # Previous thread summaries of 0 are discarded above. + # + # TODO If empty summaries are deleted this can be removed. + thread_counts[thread_id] = NotifCounts( + notify_count=notif_count, + unread_count=unread_count, + highlight_count=0, + ) + # Finally we need to count push actions that aren't included in the # summary returned above. This might be due to recent events that haven't # been summarised yet or the summary is out of date due to a recent read # receipt. 
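The receipt joins above all encode one rule: an event in a thread is read if it is covered by either the room's unthreaded receipt or that thread's own receipt, so the effective cut-off is whichever of the two is newest. A minimal sketch of that rule with plain integers and dicts standing in for the receipt rows (`effective_read_upto` is an illustrative name, not a helper in the codebase):

from typing import Dict


def effective_read_upto(
    unthreaded_receipt: int, threaded_receipts: Dict[str, int], thread_id: str
) -> int:
    # The COALESCE(threaded_receipt_stream_ordering, <unthreaded receipt>)
    # pattern in the SQL reduces to this: the subquery only surfaces threaded
    # receipts newer than the unthreaded one, and otherwise falls back to it.
    return max(unthreaded_receipt, threaded_receipts.get(thread_id, 0))


# e.g. with an unthreaded receipt at stream ordering 10 and a receipt in
# thread "$t" at 15, thread "$t" is read up to 15, every other thread up to 10.
assert effective_read_upto(10, {"$t": 15}, "$t") == 15
assert effective_read_upto(10, {"$t": 15}, "main") == 10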
- start_unread_stream_ordering = max( - receipt_stream_ordering, max_summary_stream_ordering - ) - unread_counts = self._get_notif_unread_count_for_user_room( - txn, room_id, user_id, start_unread_stream_ordering + sql = f""" + SELECT + COUNT(CASE WHEN notif = 1 THEN 1 END), + COUNT(CASE WHEN unread = 1 THEN 1 END), + thread_id + FROM event_push_actions + LEFT JOIN ( + SELECT thread_id, MAX(stream_ordering) AS threaded_receipt_stream_ordering + FROM receipts_linearized + LEFT JOIN events USING (room_id, event_id) + WHERE + user_id = ? + AND room_id = ? + AND stream_ordering > ? + AND {receipt_types_clause} + GROUP BY thread_id + ) AS receipts USING (thread_id) + WHERE user_id = ? + AND room_id = ? + AND stream_ordering > COALESCE(threaded_receipt_stream_ordering, ?) + AND NOT {thread_id_clause} + GROUP BY thread_id + """ + txn.execute( + sql, + ( + user_id, + room_id, + unthreaded_receipt_stream_ordering, + *receipts_args, + user_id, + room_id, + unthreaded_receipt_stream_ordering, + *thread_id_args, + ), ) - - for notif_count, unread_count, thread_id in unread_counts: + for notif_count, unread_count, thread_id in txn: counts = _get_thread(thread_id) counts.notify_count += notif_count counts.unread_count += unread_count @@ -526,6 +662,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas user_id: str, stream_ordering: int, max_stream_ordering: Optional[int] = None, + thread_id: Optional[str] = None, ) -> List[Tuple[int, int, str]]: """Returns the notify and unread counts from `event_push_actions` for the given user/room in the given range. @@ -540,6 +677,11 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas stream_ordering: The (exclusive) minimum stream ordering to consider. max_stream_ordering: The (inclusive) maximum stream ordering to consider. If this is not given, then no maximum is applied. + thread_id: The thread ID to fetch unread counts for. If this is not provided + then the results for *all* threads is returned. + + Note that if this is provided the resulting list will only have 0 or + 1 tuples in it. Return: A tuple of the notif count and unread count in the given range for @@ -551,10 +693,10 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas if not self._events_stream_cache.has_entity_changed(room_id, stream_ordering): return [] - clause = "" + stream_ordering_clause = "" args = [user_id, room_id, stream_ordering] if max_stream_ordering is not None: - clause = "AND ea.stream_ordering <= ?" + stream_ordering_clause = "AND ea.stream_ordering <= ?" args.append(max_stream_ordering) # If the max stream ordering is less than the min stream ordering, @@ -562,6 +704,12 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas if max_stream_ordering <= stream_ordering: return [] + # Either limit the results to a specific thread or fetch all threads. + thread_id_clause = "" + if thread_id is not None: + thread_id_clause = "AND thread_id = ?" + args.append(thread_id) + sql = f""" SELECT COUNT(CASE WHEN notif = 1 THEN 1 END), @@ -571,7 +719,8 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas WHERE user_id = ? AND room_id = ? AND ea.stream_ordering > ? 
- {clause} + {stream_ordering_clause} + {thread_id_clause} GROUP BY thread_id """ @@ -593,7 +742,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas def _get_receipts_by_room_txn( self, txn: LoggingTransaction, user_id: str - ) -> Dict[str, int]: + ) -> Dict[str, _RoomReceipt]: """ Generate a map of room ID to the latest stream ordering that has been read by the given user. @@ -603,7 +752,8 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas user_id: The user to fetch receipts for. Returns: - A map of room ID to stream ordering for all rooms the user has a receipt in. + A map including all rooms the user is in with a receipt. It maps + room IDs to _RoomReceipt instances """ receipt_types_clause, args = make_in_list_sql_clause( self.database_engine, @@ -612,20 +762,26 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas ) sql = f""" - SELECT room_id, MAX(stream_ordering) + SELECT room_id, thread_id, MAX(stream_ordering) FROM receipts_linearized INNER JOIN events USING (room_id, event_id) WHERE {receipt_types_clause} AND user_id = ? - GROUP BY room_id + GROUP BY room_id, thread_id """ args.extend((user_id,)) txn.execute(sql, args) - return { - room_id: latest_stream_ordering - for room_id, latest_stream_ordering in txn.fetchall() - } + + result: Dict[str, _RoomReceipt] = {} + for room_id, thread_id, stream_ordering in txn: + room_receipt = result.setdefault(room_id, _RoomReceipt()) + if thread_id is None: + room_receipt.unthreaded_stream_ordering = stream_ordering + else: + room_receipt.threaded_stream_ordering[thread_id] = stream_ordering + + return result async def get_unread_push_actions_for_user_in_range_for_http( self, @@ -658,9 +814,10 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas def get_push_actions_txn( txn: LoggingTransaction, - ) -> List[Tuple[str, str, int, str, bool]]: + ) -> List[Tuple[str, str, str, int, str, bool]]: sql = """ - SELECT ep.event_id, ep.room_id, ep.stream_ordering, ep.actions, ep.highlight + SELECT ep.event_id, ep.room_id, ep.thread_id, ep.stream_ordering, + ep.actions, ep.highlight FROM event_push_actions AS ep WHERE ep.user_id = ? @@ -670,7 +827,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas ORDER BY ep.stream_ordering ASC LIMIT ? """ txn.execute(sql, (user_id, min_stream_ordering, max_stream_ordering, limit)) - return cast(List[Tuple[str, str, int, str, bool]], txn.fetchall()) + return cast(List[Tuple[str, str, str, int, str, bool]], txn.fetchall()) push_actions = await self.db_pool.runInteraction( "get_unread_push_actions_for_user_in_range_http", get_push_actions_txn @@ -683,10 +840,10 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas stream_ordering=stream_ordering, actions=_deserialize_action(actions, highlight), ) - for event_id, room_id, stream_ordering, actions, highlight in push_actions - # Only include push actions with a stream ordering after any receipt, or without any - # receipt present (invited to but never read rooms). 
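The HTTP and email pusher paths now filter candidate push actions per thread rather than per room. A rough, self-contained sketch of that step; `RoomReceipt`, `build_receipts_by_room` and `unread_push_actions` are stand-ins for `_RoomReceipt`, `_get_receipts_by_room_txn` and the list comprehensions in the two `get_unread_push_actions_*` methods:

from dataclasses import dataclass, field
from typing import Dict, Iterable, List, Optional, Tuple


@dataclass
class RoomReceipt:
    # Latest unthreaded receipt plus one entry per thread the user has read.
    unthreaded: int = 0
    threaded: Dict[str, int] = field(default_factory=dict)

    def is_unread(self, thread_id: str, stream_ordering: int) -> bool:
        return (
            self.unthreaded < stream_ordering
            and self.threaded.get(thread_id, 0) < stream_ordering
        )


def build_receipts_by_room(
    rows: Iterable[Tuple[str, Optional[str], int]]
) -> Dict[str, RoomReceipt]:
    # rows are (room_id, thread_id, stream_ordering) from the
    # GROUP BY room_id, thread_id query; thread_id is None for an
    # unthreaded receipt.
    result: Dict[str, RoomReceipt] = {}
    for room_id, thread_id, stream_ordering in rows:
        receipt = result.setdefault(room_id, RoomReceipt())
        if thread_id is None:
            receipt.unthreaded = stream_ordering
        else:
            receipt.threaded[thread_id] = stream_ordering
    return result


def unread_push_actions(
    actions: Iterable[Tuple[str, str, str, int]],
    receipts: Dict[str, RoomReceipt],
) -> List[Tuple[str, str, str, int]]:
    # actions are (event_id, room_id, thread_id, stream_ordering); a room with
    # no receipt at all (e.g. invited to but never read) keeps all its actions.
    no_receipt = RoomReceipt()
    return [
        action
        for action in actions
        if receipts.get(action[1], no_receipt).is_unread(action[2], action[3])
    ]

Keying threaded receipts by thread ID keeps the common case of a single receipt per room cheap while still allowing one room to carry many thread receipts.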
- if stream_ordering > receipts_by_room.get(room_id, 0) + for event_id, room_id, thread_id, stream_ordering, actions, highlight in push_actions + if receipts_by_room.get(room_id, MISSING_ROOM_RECEIPT).is_unread( + thread_id, stream_ordering + ) ] # Now sort it so it's ordered correctly, since currently it will @@ -730,10 +887,10 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas def get_push_actions_txn( txn: LoggingTransaction, - ) -> List[Tuple[str, str, int, str, bool, int]]: + ) -> List[Tuple[str, str, str, int, str, bool, int]]: sql = """ - SELECT ep.event_id, ep.room_id, ep.stream_ordering, ep.actions, - ep.highlight, e.received_ts + SELECT ep.event_id, ep.room_id, ep.thread_id, ep.stream_ordering, + ep.actions, ep.highlight, e.received_ts FROM event_push_actions AS ep INNER JOIN events AS e USING (room_id, event_id) WHERE @@ -744,7 +901,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas ORDER BY ep.stream_ordering DESC LIMIT ? """ txn.execute(sql, (user_id, min_stream_ordering, max_stream_ordering, limit)) - return cast(List[Tuple[str, str, int, str, bool, int]], txn.fetchall()) + return cast(List[Tuple[str, str, str, int, str, bool, int]], txn.fetchall()) push_actions = await self.db_pool.runInteraction( "get_unread_push_actions_for_user_in_range_email", get_push_actions_txn @@ -759,10 +916,10 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas actions=_deserialize_action(actions, highlight), received_ts=received_ts, ) - for event_id, room_id, stream_ordering, actions, highlight, received_ts in push_actions - # Only include push actions with a stream ordering after any receipt, or without any - # receipt present (invited to but never read rooms). - if stream_ordering > receipts_by_room.get(room_id, 0) + for event_id, room_id, thread_id, stream_ordering, actions, highlight, received_ts in push_actions + if receipts_by_room.get(room_id, MISSING_ROOM_RECEIPT).is_unread( + thread_id, stream_ordering + ) ] # Now sort it so it's ordered correctly, since currently it will @@ -1086,7 +1243,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas ) sql = """ - SELECT r.stream_id, r.room_id, r.user_id, e.stream_ordering + SELECT r.stream_id, r.room_id, r.user_id, r.thread_id, e.stream_ordering FROM receipts_linearized AS r INNER JOIN events AS e USING (event_id) WHERE ? < r.stream_id AND r.stream_id <= ? AND user_id LIKE ? @@ -1107,45 +1264,69 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas limit, ), ) - rows = cast(List[Tuple[int, str, str, int]], txn.fetchall()) + rows = cast(List[Tuple[int, str, str, Optional[str], int]], txn.fetchall()) # For each new read receipt we delete push actions from before it and # recalculate the summary. - for _, room_id, user_id, stream_ordering in rows: + # + # Care must be taken of whether it is a threaded or unthreaded receipt. + for _, room_id, user_id, thread_id, stream_ordering in rows: # Only handle our own read receipts. if not self.hs.is_mine_id(user_id): continue + thread_clause = "" + thread_args: Tuple = () + if thread_id is not None: + thread_clause = "AND thread_id = ?" + thread_args = (thread_id,) + + # For each new read receipt we delete push actions from before it and + # recalculate the summary. txn.execute( - """ + f""" DELETE FROM event_push_actions WHERE room_id = ? AND user_id = ? AND stream_ordering <= ? 
AND highlight = 0 + {thread_clause} """, - (room_id, user_id, stream_ordering), + (room_id, user_id, stream_ordering, *thread_args), ) # Fetch the notification counts between the stream ordering of the # latest receipt and what was previously summarised. unread_counts = self._get_notif_unread_count_for_user_room( - txn, room_id, user_id, stream_ordering, old_rotate_stream_ordering - ) - - # First mark the summary for all threads in the room as cleared. - self.db_pool.simple_update_txn( txn, - table="event_push_summary", - keyvalues={"user_id": user_id, "room_id": room_id}, - updatevalues={ - "notif_count": 0, - "unread_count": 0, - "stream_ordering": old_rotate_stream_ordering, - "last_receipt_stream_ordering": stream_ordering, - }, + room_id, + user_id, + stream_ordering, + old_rotate_stream_ordering, + thread_id, ) + # For an unthreaded receipt, mark the summary for all threads in the room + # as cleared. + if thread_id is None: + self.db_pool.simple_update_txn( + txn, + table="event_push_summary", + keyvalues={"user_id": user_id, "room_id": room_id}, + updatevalues={ + "notif_count": 0, + "unread_count": 0, + "stream_ordering": old_rotate_stream_ordering, + "last_receipt_stream_ordering": stream_ordering, + }, + ) + + # For a threaded receipt, we *always* want to update that receipt, + # event if there are no new notifications in that thread. This ensures + # the stream_ordering & last_receipt_stream_ordering are updated. + elif not unread_counts: + unread_counts = [(0, 0, thread_id)] + # Then any updated threads get their notification count and unread # count updated. self.db_pool.simple_update_many_txn( @@ -1153,8 +1334,16 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas table="event_push_summary", key_names=("room_id", "user_id", "thread_id"), key_values=[(room_id, user_id, row[2]) for row in unread_counts], - value_names=("notif_count", "unread_count"), - value_values=[(row[0], row[1]) for row in unread_counts], + value_names=( + "notif_count", + "unread_count", + "stream_ordering", + "last_receipt_stream_ordering", + ), + value_values=[ + (row[0], row[1], old_rotate_stream_ordering, stream_ordering) + for row in unread_counts + ], ) # We always update `event_push_summary_last_receipt_stream_id` to diff --git a/synapse/storage/databases/main/relations.py b/synapse/storage/databases/main/relations.py index 898947af95..154385b1e8 100644 --- a/synapse/storage/databases/main/relations.py +++ b/synapse/storage/databases/main/relations.py @@ -832,6 +832,42 @@ class RelationsWorkerStore(SQLBaseStore): "get_event_relations", _get_event_relations ) + @cached() + async def get_thread_id(self, event_id: str) -> Optional[str]: + """ + Get the thread ID for an event. This considers multi-level relations, + e.g. an annotation to an event which is part of a thread. + + Args: + event_id: The event ID to fetch the thread ID for. + + Returns: + The event ID of the root event in the thread, if this event is part + of a thread. None, otherwise. + """ + # Since event relations form a tree, we should only ever find 0 or 1 + # results from the below query. + sql = """ + WITH RECURSIVE related_events AS ( + SELECT event_id, relates_to_id, relation_type + FROM event_relations + WHERE event_id = ? 
+                UNION SELECT e.event_id, e.relates_to_id, e.relation_type
+                FROM event_relations e
+                INNER JOIN related_events r ON r.relates_to_id = e.event_id
+            ) SELECT relates_to_id FROM related_events WHERE relation_type = 'm.thread';
+        """
+
+        def _get_thread_id(txn: LoggingTransaction) -> Optional[str]:
+            txn.execute(sql, (event_id,))
+            # TODO Should we ensure there's only a single result here?
+            row = txn.fetchone()
+            if row:
+                return row[0]
+            return None
+
+        return await self.db_pool.runInteraction("get_thread_id", _get_thread_id)
+

class RelationsStore(RelationsWorkerStore):
    pass
diff --git a/synapse/storage/schema/main/delta/73/08thread_receipts_non_null.sql.postgres b/synapse/storage/schema/main/delta/73/08thread_receipts_non_null.sql.postgres
new file mode 100644
index 0000000000..3e0bc9e5eb
--- /dev/null
+++ b/synapse/storage/schema/main/delta/73/08thread_receipts_non_null.sql.postgres
@@ -0,0 +1,23 @@
+/* Copyright 2022 The Matrix.org Foundation C.I.C
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+-- Drop constraint on (room_id, receipt_type, user_id).
+
+-- Rebuild the unique constraint with the thread_id.
+ALTER TABLE receipts_linearized
+    DROP CONSTRAINT receipts_linearized_uniqueness;
+
+ALTER TABLE receipts_graph
+    DROP CONSTRAINT receipts_graph_uniqueness;
diff --git a/synapse/storage/schema/main/delta/73/08thread_receipts_non_null.sql.sqlite b/synapse/storage/schema/main/delta/73/08thread_receipts_non_null.sql.sqlite
new file mode 100644
index 0000000000..e664889fbc
--- /dev/null
+++ b/synapse/storage/schema/main/delta/73/08thread_receipts_non_null.sql.sqlite
@@ -0,0 +1,76 @@
+/* Copyright 2022 The Matrix.org Foundation C.I.C
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+-- Drop constraint on (room_id, receipt_type, user_id).
+--
+-- SQLite doesn't support modifying constraints to an existing table, so it must
+-- be recreated.
+
+-- Create the new tables.
+CREATE TABLE receipts_linearized_new ( + stream_id BIGINT NOT NULL, + room_id TEXT NOT NULL, + receipt_type TEXT NOT NULL, + user_id TEXT NOT NULL, + event_id TEXT NOT NULL, + thread_id TEXT, + event_stream_ordering BIGINT, + data TEXT NOT NULL, + CONSTRAINT receipts_linearized_uniqueness_thread UNIQUE (room_id, receipt_type, user_id, thread_id) +); + +CREATE TABLE receipts_graph_new ( + room_id TEXT NOT NULL, + receipt_type TEXT NOT NULL, + user_id TEXT NOT NULL, + event_ids TEXT NOT NULL, + thread_id TEXT, + data TEXT NOT NULL, + CONSTRAINT receipts_graph_uniqueness_thread UNIQUE (room_id, receipt_type, user_id, thread_id) +); + +-- Drop the old indexes. +DROP INDEX IF EXISTS receipts_linearized_id; +DROP INDEX IF EXISTS receipts_linearized_room_stream; +DROP INDEX IF EXISTS receipts_linearized_user; + +-- Copy the data. +INSERT INTO receipts_linearized_new (stream_id, room_id, receipt_type, user_id, event_id, data) + SELECT stream_id, room_id, receipt_type, user_id, event_id, data + FROM receipts_linearized; +INSERT INTO receipts_graph_new (room_id, receipt_type, user_id, event_ids, data) + SELECT room_id, receipt_type, user_id, event_ids, data + FROM receipts_graph; + +-- Drop the old tables. +DROP TABLE receipts_linearized; +DROP TABLE receipts_graph; + +-- Rename the tables. +ALTER TABLE receipts_linearized_new RENAME TO receipts_linearized; +ALTER TABLE receipts_graph_new RENAME TO receipts_graph; + +-- Create the indices. +CREATE INDEX receipts_linearized_id ON receipts_linearized( stream_id ); +CREATE INDEX receipts_linearized_room_stream ON receipts_linearized( room_id, stream_id ); +CREATE INDEX receipts_linearized_user ON receipts_linearized( user_id ); + +-- Re-run background updates from 72/08thread_receipts.sql. +INSERT INTO background_updates (ordering, update_name, progress_json) VALUES + (7308, 'receipts_linearized_unique_index', '{}') + ON CONFLICT (update_name) DO NOTHING; +INSERT INTO background_updates (ordering, update_name, progress_json) VALUES + (7308, 'receipts_graph_unique_index', '{}') + ON CONFLICT (update_name) DO NOTHING; diff --git a/tests/storage/test_event_push_actions.py b/tests/storage/test_event_push_actions.py index 89f986ac34..ee48920f84 100644 --- a/tests/storage/test_event_push_actions.py +++ b/tests/storage/test_event_push_actions.py @@ -16,6 +16,7 @@ from typing import Optional, Tuple from twisted.test.proto_helpers import MemoryReactor +from synapse.api.constants import MAIN_TIMELINE, RelationTypes from synapse.rest import admin from synapse.rest.client import login, room from synapse.server import HomeServer @@ -65,16 +66,23 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): user_id, token, _, other_token, room_id = self._create_users_and_room() # Create two events, one of which is a highlight. - self.helper.send_event( + first_event_id = self.helper.send_event( room_id, type="m.room.message", content={"msgtype": "m.text", "body": "msg"}, tok=other_token, - ) - event_id = self.helper.send_event( + )["event_id"] + second_event_id = self.helper.send_event( room_id, type="m.room.message", - content={"msgtype": "m.text", "body": user_id}, + content={ + "msgtype": "m.text", + "body": user_id, + "m.relates_to": { + "rel_type": RelationTypes.THREAD, + "event_id": first_event_id, + }, + }, tok=other_token, )["event_id"] @@ -94,13 +102,13 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): ) self.assertEqual(2, len(email_actions)) - # Send a receipt, which should clear any actions. 
+ # Send a receipt, which should clear the first action. self.get_success( self.store.insert_receipt( room_id, "m.read", user_id=user_id, - event_ids=[event_id], + event_ids=[first_event_id], thread_id=None, data={}, ) @@ -110,6 +118,30 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): user_id, 0, 1000, 20 ) ) + self.assertEqual(1, len(http_actions)) + email_actions = self.get_success( + self.store.get_unread_push_actions_for_user_in_range_for_email( + user_id, 0, 1000, 20 + ) + ) + self.assertEqual(1, len(email_actions)) + + # Send a thread receipt to clear the thread action. + self.get_success( + self.store.insert_receipt( + room_id, + "m.read", + user_id=user_id, + event_ids=[second_event_id], + thread_id=first_event_id, + data={}, + ) + ) + http_actions = self.get_success( + self.store.get_unread_push_actions_for_user_in_range_for_http( + user_id, 0, 1000, 20 + ) + ) self.assertEqual([], http_actions) email_actions = self.get_success( self.store.get_unread_push_actions_for_user_in_range_for_email( @@ -312,7 +344,7 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): def _rotate() -> None: self.get_success(self.store._rotate_notifs()) - def _mark_read(event_id: str, thread_id: Optional[str] = None) -> None: + def _mark_read(event_id: str, thread_id: str = MAIN_TIMELINE) -> None: self.get_success( self.store.insert_receipt( room_id, @@ -348,9 +380,12 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): _create_event() _create_event(thread_id=thread_id) _mark_read(event_id) + _assert_counts(1, 0, 3, 0) + _mark_read(event_id, thread_id) _assert_counts(1, 0, 1, 0) _mark_read(last_event_id) + _mark_read(last_event_id, thread_id) _assert_counts(0, 0, 0, 0) _create_event() @@ -364,6 +399,7 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): _assert_counts(1, 0, 1, 0) _mark_read(last_event_id) + _mark_read(last_event_id, thread_id) _assert_counts(0, 0, 0, 0) _create_event(True) @@ -389,11 +425,183 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): # Check that sending read receipts at different points results in the # right counts. _mark_read(event_id) + _assert_counts(1, 0, 2, 1) + _mark_read(event_id, thread_id) + _assert_counts(1, 0, 1, 0) + _mark_read(last_event_id) + _assert_counts(0, 0, 1, 0) + _mark_read(last_event_id, thread_id) + _assert_counts(0, 0, 0, 0) + + _create_event(True) + _create_event(True, thread_id) + _assert_counts(1, 1, 1, 1) + _mark_read(last_event_id) + _mark_read(last_event_id, thread_id) + _assert_counts(0, 0, 0, 0) + _rotate() + _assert_counts(0, 0, 0, 0) + + def test_count_aggregation_mixed(self) -> None: + """ + This is essentially the same test as test_count_aggregation_threads, but + sends both unthreaded and threaded receipts. 
+ """ + + user_id, token, _, other_token, room_id = self._create_users_and_room() + thread_id: str + + last_event_id: str + + def _assert_counts( + noitf_count: int, + highlight_count: int, + thread_notif_count: int, + thread_highlight_count: int, + ) -> None: + counts = self.get_success( + self.store.db_pool.runInteraction( + "get-unread-counts", + self.store._get_unread_counts_by_receipt_txn, + room_id, + user_id, + ) + ) + self.assertEqual( + counts.main_timeline, + NotifCounts( + notify_count=noitf_count, + unread_count=0, + highlight_count=highlight_count, + ), + ) + if thread_notif_count or thread_highlight_count: + self.assertEqual( + counts.threads, + { + thread_id: NotifCounts( + notify_count=thread_notif_count, + unread_count=0, + highlight_count=thread_highlight_count, + ), + }, + ) + else: + self.assertEqual(counts.threads, {}) + + def _create_event( + highlight: bool = False, thread_id: Optional[str] = None + ) -> str: + content: JsonDict = { + "msgtype": "m.text", + "body": user_id if highlight else "msg", + } + if thread_id: + content["m.relates_to"] = { + "rel_type": "m.thread", + "event_id": thread_id, + } + + result = self.helper.send_event( + room_id, + type="m.room.message", + content=content, + tok=other_token, + ) + nonlocal last_event_id + last_event_id = result["event_id"] + return last_event_id + + def _rotate() -> None: + self.get_success(self.store._rotate_notifs()) + + def _mark_read(event_id: str, thread_id: Optional[str] = None) -> None: + self.get_success( + self.store.insert_receipt( + room_id, + "m.read", + user_id=user_id, + event_ids=[event_id], + thread_id=thread_id, + data={}, + ) + ) + + _assert_counts(0, 0, 0, 0) + thread_id = _create_event() + _assert_counts(1, 0, 0, 0) + _rotate() + _assert_counts(1, 0, 0, 0) + + _create_event(thread_id=thread_id) + _assert_counts(1, 0, 1, 0) + _rotate() + _assert_counts(1, 0, 1, 0) + + _create_event() + _assert_counts(2, 0, 1, 0) + _rotate() + _assert_counts(2, 0, 1, 0) + + event_id = _create_event(thread_id=thread_id) + _assert_counts(2, 0, 2, 0) + _rotate() + _assert_counts(2, 0, 2, 0) + + _create_event() + _create_event(thread_id=thread_id) + _mark_read(event_id) + _assert_counts(1, 0, 1, 0) + + _mark_read(last_event_id, MAIN_TIMELINE) + _mark_read(last_event_id, thread_id) + _assert_counts(0, 0, 0, 0) + + _create_event() + _create_event(thread_id=thread_id) + _assert_counts(1, 0, 1, 0) + _rotate() + _assert_counts(1, 0, 1, 0) + + # Delete old event push actions, this should not affect the (summarised) count. + self.get_success(self.store._remove_old_push_actions_that_have_rotated()) _assert_counts(1, 0, 1, 0) + _mark_read(last_event_id) _assert_counts(0, 0, 0, 0) _create_event(True) + _assert_counts(1, 1, 0, 0) + _rotate() + _assert_counts(1, 1, 0, 0) + + event_id = _create_event(True, thread_id) + _assert_counts(1, 1, 1, 1) + _rotate() + _assert_counts(1, 1, 1, 1) + + # Check that adding another notification and rotating after highlight + # works. + _create_event() + _rotate() + _assert_counts(2, 1, 1, 1) + + _create_event(thread_id=thread_id) + _rotate() + _assert_counts(2, 1, 2, 1) + + # Check that sending read receipts at different points results in the + # right counts. 
+ _mark_read(event_id) + _assert_counts(1, 0, 1, 0) + _mark_read(event_id, MAIN_TIMELINE) + _assert_counts(1, 0, 1, 0) + _mark_read(last_event_id, MAIN_TIMELINE) + _assert_counts(0, 0, 1, 0) + _mark_read(last_event_id, thread_id) + _assert_counts(0, 0, 0, 0) + + _create_event(True) _create_event(True, thread_id) _assert_counts(1, 1, 1, 1) _mark_read(last_event_id) @@ -401,6 +609,106 @@ class EventPushActionsStoreTestCase(HomeserverTestCase): _rotate() _assert_counts(0, 0, 0, 0) + def test_recursive_thread(self) -> None: + """ + Events related to events in a thread should still be considered part of + that thread. + """ + + # Create a user to receive notifications and send receipts. + user_id = self.register_user("user1235", "pass") + token = self.login("user1235", "pass") + + # And another users to send events. + other_id = self.register_user("other", "pass") + other_token = self.login("other", "pass") + + # Create a room and put both users in it. + room_id = self.helper.create_room_as(user_id, tok=token) + self.helper.join(room_id, other_id, tok=other_token) + + # Update the user's push rules to care about reaction events. + self.get_success( + self.store.add_push_rule( + user_id, + "related_events", + priority_class=5, + conditions=[ + {"kind": "event_match", "key": "type", "pattern": "m.reaction"} + ], + actions=["notify"], + ) + ) + + def _create_event(type: str, content: JsonDict) -> str: + result = self.helper.send_event( + room_id, type=type, content=content, tok=other_token + ) + return result["event_id"] + + def _assert_counts(noitf_count: int, thread_notif_count: int) -> None: + counts = self.get_success( + self.store.db_pool.runInteraction( + "get-unread-counts", + self.store._get_unread_counts_by_receipt_txn, + room_id, + user_id, + ) + ) + self.assertEqual( + counts.main_timeline, + NotifCounts( + notify_count=noitf_count, unread_count=0, highlight_count=0 + ), + ) + if thread_notif_count: + self.assertEqual( + counts.threads, + { + thread_id: NotifCounts( + notify_count=thread_notif_count, + unread_count=0, + highlight_count=0, + ), + }, + ) + else: + self.assertEqual(counts.threads, {}) + + # Create a root event. + thread_id = _create_event( + "m.room.message", {"msgtype": "m.text", "body": "msg"} + ) + _assert_counts(1, 0) + + # Reply, creating a thread. + reply_id = _create_event( + "m.room.message", + { + "msgtype": "m.text", + "body": "msg", + "m.relates_to": { + "rel_type": "m.thread", + "event_id": thread_id, + }, + }, + ) + _assert_counts(1, 1) + + # Create an event related to a thread event, this should still appear in + # the thread. + _create_event( + type="m.reaction", + content={ + "m.relates_to": { + "rel_type": "m.annotation", + "event_id": reply_id, + "key": "A", + } + }, + ) + _assert_counts(1, 2) + def test_find_first_stream_ordering_after_ts(self) -> None: def add_event(so: int, ts: int) -> None: self.get_success( |
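Several pieces of this change hang off the new `get_thread_id` lookup in `relations.py`: the bulk push rule evaluator falls back to it when an event's own relation is not `m.thread`, and the receipts servlet uses it to validate threaded receipts. A rough Python rendering of what the recursive CTE computes and how the evaluator uses the result; the in-memory `relations` map and `thread_id_for_push` are illustrative only, the real code queries the `event_relations` table:

from typing import Dict, Optional, Tuple

MAIN_TIMELINE = "main"

# event_id -> (relates_to_id, relation_type), i.e. one row of event_relations.
Relations = Dict[str, Tuple[str, str]]


def get_thread_id(event_id: str, relations: Relations) -> Optional[str]:
    """Walk up the relation chain until an m.thread edge is found.

    This is what the recursive CTE in RelationsWorkerStore.get_thread_id
    computes: the thread root for the event, or None if it is not in a thread.
    """
    current = event_id
    seen = set()  # the relations form a tree; this is only a cycle guard
    while current in relations and current not in seen:
        seen.add(current)
        relates_to_id, relation_type = relations[current]
        if relation_type == "m.thread":
            return relates_to_id
        current = relates_to_id
    return None


def thread_id_for_push(
    parent_id: str, rel_type: str, relations: Relations
) -> str:
    # Mirrors the bulk_push_rule_evaluator change: a direct m.thread relation
    # names the thread itself; otherwise the (not yet persisted) event inherits
    # whatever thread its parent is in, defaulting to the main timeline.
    if rel_type == "m.thread":
        return parent_id
    return get_thread_id(parent_id, relations) or MAIN_TIMELINE


relations: Relations = {
    "$reply": ("$root", "m.thread"),
    "$reaction": ("$reply", "m.annotation"),
}
# A reaction to a threaded reply resolves to the thread root ...
assert get_thread_id("$reaction", relations) == "$root"
# ... so a new annotation of that reply is pushed as part of thread $root,
assert thread_id_for_push("$reply", "m.annotation", relations) == "$root"
# while an event relating only to the root stays on the main timeline.
assert thread_id_for_push("$root", "m.annotation", relations) == MAIN_TIMELINE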