| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
Some argument finagling was needed as query_local_devices can be called
from requests of both local and remote users, and in the case of remote
users, without a user ID.
In the end, we have an option 'from_local_user_id' which tells
`query_local_devices` both a) whether the request is from a local or
remote user and b) if a local user, which one.
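A minimal sketch of the resulting shape, with illustrative type hints (the real Synapse signature differs in its details):
```python
from typing import Dict, List, Optional

# Hypothetical sketch: an optional `from_local_user_id` both signals whether
# the request came from a local user and, if so, identifies that user.
async def query_local_devices(
    query: Dict[str, Optional[List[str]]],
    from_local_user_id: Optional[str] = None,
) -> Dict[str, Dict[str, dict]]:
    if from_local_user_id is not None:
        # Local requester: we know exactly who is asking.
        ...
    else:
        # Request made on behalf of a remote user: no user ID available.
        ...
    return {}
```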
|
|
|
|
|
|
|
|
| |
MSC3952 defines push rules which search for mentions in a list of
Matrix IDs in the event body, instead of searching the entire event
body for the display name / localpart.
This is implemented behind an experimental configuration flag and
does not yet implement the backwards compatibility pieces of the MSC.
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Batch look-ups to see if rooms are partial stated.
* Fix issues found in linting.
* Fix typo.
* Apply suggestions from code review
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Clarify comments.
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Also improve the cache size while we're at it
* is_partial_state_rooms -> is_partial_state_room_batched
* Run `black`
* Improve annotation for `simple_select_many_batch`
* Fix is_partial_state_room_batched impl
* Okay, _actually_ fix impl
* Update description.
* Update synapse/storage/databases/main/room.py
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
* Run black.
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
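A rough sketch of the batching idea (standalone sqlite3 for illustration; the table name follows Synapse's `partial_state_rooms`, the rest is simplified): one query answers the question for a whole batch of rooms instead of one query per room.
```python
import sqlite3
from typing import Collection, Dict

def is_partial_state_room_batched(
    conn: sqlite3.Connection, room_ids: Collection[str]
) -> Dict[str, bool]:
    room_ids = list(room_ids)
    if not room_ids:
        return {}
    placeholders = ", ".join("?" for _ in room_ids)
    rows = conn.execute(
        f"SELECT room_id FROM partial_state_rooms WHERE room_id IN ({placeholders})",
        room_ids,
    ).fetchall()
    partial = {room_id for (room_id,) in rows}
    # Rooms not present in the table are fully stated.
    return {room_id: room_id in partial for room_id in room_ids}
```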
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
On startup, the `_device_list_id_gen` stream id generator is initialized
using the maximum stream id seen in a list of tables. When we started
populating the `device_list_remote_pending` table in #13913, we forgot
to add it to the aforementioned list of tables, so the stream id
generator can hand out old stream ids after a restart. The end result is
that Synapse can fail to handle device list update EDUs after a restart
when a partial state join is in progress.
Add the `device_list_remote_pending` table to the list of tables to
consider when initializing the `_device_list_id_gen` stream id generator.
Signed-off-by: Sean Quah <seanq@matrix.org>
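A simplified sketch of that initialisation (only `device_list_remote_pending` is named above; the other table names here are illustrative assumptions):
```python
import sqlite3

def load_current_device_list_stream_id(conn: sqlite3.Connection) -> int:
    # The generator must start from the maximum stream ID seen across *every*
    # table it feeds, otherwise it can hand out IDs that were already used.
    tables = [
        "device_lists_stream",          # illustrative
        "device_lists_outbound_pokes",  # illustrative
        "device_list_remote_pending",   # the table that was missed
    ]
    current = 1
    for table in tables:
        (max_id,) = conn.execute(
            f"SELECT COALESCE(MAX(stream_id), 1) FROM {table}"
        ).fetchone()
        current = max(current, max_id)
    return current
```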
|
| |
| |
| |
| | |
For better type safety we use an enum instead of strings to
configure direction (backwards or forwards).
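A minimal sketch of the change (the member values are assumed to mirror the old `"b"`/`"f"` string constants):
```python
import enum

class Direction(enum.Enum):
    BACKWARDS = "b"
    FORWARDS = "f"

def order_clause(direction: Direction) -> str:
    # A type checker now rejects order_clause("b"); callers must pass the enum.
    if direction is Direction.BACKWARDS:
        return "ORDER BY stream_ordering DESC"
    return "ORDER BY stream_ordering ASC"
```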
|
|/
|
|
|
| |
The `/relations` endpoint did not properly handle "live tokens"
(i.e. sync tokens). To do this properly, we abstract the code that
`/messages` uses and re-use it.
|
|
|
|
|
|
|
|
|
|
| |
#13873 introduced a regression which causes sqlite database migrations
to no longer run inside a transaction. Wrap them in a transaction again,
to avoid database corruption when migrations are interrupted.
Fixes #14909.
Signed-off-by: Sean Quah <seanq@matrix.org>
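A simplified sketch of the idea (not the actual migration runner): execute the whole delta inside one explicit transaction, so that an interrupted migration rolls back cleanly instead of leaving the schema half-applied.
```python
import sqlite3
from typing import List

def run_delta(conn: sqlite3.Connection, statements: List[str]) -> None:
    conn.isolation_level = None  # manage the transaction ourselves
    cur = conn.cursor()
    cur.execute("BEGIN")
    try:
        for sql in statements:
            cur.execute(sql)
        cur.execute("COMMIT")
    except Exception:
        cur.execute("ROLLBACK")
        raise
```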
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Request partial joins by default
This is a little sloppy, but we are trying to gain confidence in faster
joins in the upcoming RC.
Admins can still opt out by adding the following to their Synapse
config:
```yaml
experimental:
faster_joins: false
```
We may revert this change before the release proper, depending on how
testing in the wild goes.
* Changelog
* Try to fix the backfill test failures
* Upgrade notes
* Postgres compat?
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#14870)
* Allow `AbstractSet` in `StrCollection`
Or else frozensets are excluded. This will be useful in an upcoming
commit where I plan to change a function that accepts `List[str]` to
accept `StrCollection` instead.
* `rooms_to_exclude` -> `rooms_to_exclude_globally`
I am about to make use of this exclusion mechanism to exclude rooms for
a specific user and a specific sync. This rename helps to clarify the
distinction between the global config and the rooms to exclude for a
specific sync.
* Better function names for internal sync methods
* Track a list of excluded rooms on SyncResultBuilder
I plan to feed a list of partially stated rooms for this sync to ignore
* Exclude partial state rooms during eager sync
using the mechanism established in the previous commit
* Track un-partial-state stream in sync tokens
So that we can work out which rooms have become fully-stated during a
given sync period.
* Fix mutation of `@cached` return value
This was fouling up a complement test added alongside this PR.
Excluding a room would mean the set of forgotten rooms in the cache
would be extended. This means that room could be erroneously considered
forgotten in the future.
Introduced in #12310, Synapse 1.57.0. I don't think this had any
user-visible side effects (until now).
* SyncResultBuilder: track rooms to force as newly joined
Similar plan as before. We've omitted rooms from certain sync responses;
now we establish the mechanism to reintroduce them into future syncs.
* Read new field, to present rooms as newly joined
* Force un-partial-stated rooms to be newly-joined
for eager incremental syncs only, provided they're still fully stated
* Notify user stream listeners to wake up long polling syncs
* Changelog
* Typo fix
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Unnecessary list cast
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Rephrase comment
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Another comment
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Fixup merge(?)
* Poke notifier when receiving un-partial-stated msg over replication
* Fixup merge whoops
Thanks MV :)
Co-authored-by: Mathieu Velen <mathieuv@matrix.org>
Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
|
|
|
|
|
|
|
| |
* Skip processing stats for broken rooms.
* Newsfragment
* Use a custom exception.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
finishing join (#14874)
* Faster joins: Update room stats and user directory on workers when done
When finishing a partial state join to a room, we update the current
state of the room without persisting additional events. Workers receive
notice of the current state update over replication, but neglect to wake
the room stats and user directory updaters, which then get incidentally
triggered the next time an event is persisted or an unrelated event
persister sends out a stream position update.
We wake the room stats and user directory updaters at the appropriate
time in this commit.
Part of #12814 and #12815.
Signed-off-by: Sean Quah <seanq@matrix.org>
* fixup comment
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Enable Complement tests for Faster Remote Room Joins on worker-mode
* (dangerous) Add an override to allow Complement to use FRRJ under workers
* Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
* Fix race where we didn't send out replication notification
* MORE HACKS
* Fix get_un_partial_stated_rooms_token to take instance_name
* Fix bad merge
* Remove warning
* Correctly advance un_partial_stated_room_stream
* Fix merge
* Add another notify_replication
* Fixups
* Create a separate ReplicationNotifier
* Fix test
* Fix portdb
* Create a separate ReplicationNotifier
* Fix test
* Fix portdb
* Fix presence test
* Newsfile
* Apply suggestions from code review
* Update changelog.d/14752.misc
Co-authored-by: Erik Johnston <erik@matrix.org>
* lint
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
Co-authored-by: Erik Johnston <erik@matrix.org>
|
|
|
| |
This ensures that all other workers are told about stream updates in a timely manner, without having to remember to manually poke replication.
|
| |
|
|
|
|
| |
This should hopefully mitigate a class of races where data gets out of
sync due to an HTTP replication request racing with the replication streams.
|
| |
|
| |
|
| |
|
|\ |
|
| |
| |
| | |
Fixes #14814
|
|/
|
|
| |
for jumping to a specific date in the timeline of a room. (#14799)
|
|
|
|
| |
devices. (#14716)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This creates a new store method, `process_replication_position` that
is called after `process_replication_rows`. By moving stream ID advances
here this guarantees any relevant cache invalidations will have been
applied before the stream is advanced.
This avoids race conditions where Python switches between threads midway
through processing the `process_replication_rows` method, where stream
IDs may be advanced before caches are invalidated due to class resolution
ordering.
See this comment/issue for further discussion:
https://github.com/matrix-org/synapse/issues/14158#issuecomment-1344048703
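A toy sketch of the ordering this guarantees (method names from the description above, class structure heavily simplified):
```python
class ExampleStore:
    def __init__(self) -> None:
        self._cache: dict = {}
        self._stream_position = 0

    def process_replication_rows(self, stream_name, instance_name, token, rows) -> None:
        # Invalidate caches first...
        for row in rows:
            self._cache.pop(row, None)

    def process_replication_position(self, stream_name, instance_name, token) -> None:
        # ...and only then advance the stream position, so readers never see a
        # new position with stale cache entries.
        self._stream_position = token
```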
|
| |
|
|
|
|
| |
receiving un-partial-stated event notifications over replication. [rei:frrj/streams/unpsr] (#14546)
|
|
|
|
| |
replication. [rei:frrj/streams/unpsr] (#14545)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If a Synapse deployment upgraded (from < 1.62.0 to >= 1.70.0) then it
is possible for schema deltas to run before background updates, causing
drift in the database schema, due to:
1. A delta registered a background update to create an index.
2. A delta dropped the above index if it exists (but it won't yet exist, since
the background job hasn't run).
3. The code assumed the index was dropped.
To fix this we:
1. Cancel the background update which could create the index.
2. Drop the index again.
3. Drop a related index which is dropped by the background update.
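A sketch of what such a fix-up delta can look like (the update and index names here are placeholders, not the real ones):
```python
def run_fixup_delta(cur) -> None:
    # 1. Cancel the queued background update that would (re)create the index.
    cur.execute(
        "DELETE FROM background_updates WHERE update_name = ?",
        ("example_create_index_update",),
    )
    # 2. Drop the index again, in case it was already created.
    cur.execute("DROP INDEX IF EXISTS example_index")
    # 3. Drop the related index that the background update would have dropped.
    cur.execute("DROP INDEX IF EXISTS example_related_index")
```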
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Declare new config
* Parse new config
* Read new config
* Don't use trial/our TestCase where it's not needed
Before:
```
$ time trial tests/events/test_utils.py > /dev/null
real 0m2.277s
user 0m2.186s
sys 0m0.083s
```
After:
```
$ time trial tests/events/test_utils.py > /dev/null
real 0m0.566s
user 0m0.508s
sys 0m0.056s
```
* Helper to upsert to event fields
without exceeding size limits.
* Use helper when adding invite/knock state
Now that we allow admins to include events in prejoin room state with
arbitrary state keys, be a good Matrix citizen and ensure they don't
accidentally create an oversized event.
* Changelog
* Move StateFilter tests
should have done this in #14668
* Add extra methods to StateFilter
* Use StateFilter
* Ensure test file enforces typed defs; alphabetise
* Workaround surprising get_current_state_ids
* Whoops, fix mypy
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Enable `--warn-redundant-casts` option in mypy
Doesn't do much but helps me sleep better at night.
* Changelog
* Fix name of the ignore
* Fix one more missed cast
Not sure why I didn't see this one locally, maybe I needed a poetry update
* Remove old comment
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
|
|
|
|
| |
* Move `StateFilter` to `synapse.types`
* Changelog
|
| |
|
|
|
|
|
|
|
| |
Fixes #13655
This change uses ICU (International Components for Unicode) to improve boundary detection in user search.
This change also adds a new dependency on libicu-dev and pkg-config for the Debian packages, which are available in all supported distros.
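A small sketch of ICU-based word segmentation with PyICU (this mirrors the idea; it is not necessarily the exact parser used):
```python
import icu  # PyICU

def split_search_term(text: str) -> list:
    breaker = icu.BreakIterator.createWordInstance(icu.Locale.getDefault())
    breaker.setText(text)
    words = []
    start = 0
    while True:
        end = breaker.nextBoundary()
        if end < 0:  # BreakIterator.DONE
            break
        token = text[start:end].strip()
        if token:
            words.append(token)
        start = end
    return words
```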
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When Synapse is terminated while running the background update to create
the `receipts_graph` or `receipts_linearized` indexes, the indexes may
be successfully created (or marked as invalid on postgres) while the
background update remains unfinished. When Synapse next starts up, the
background update will fail because the index already exists, or exists
but is invalid on postgres.
Use the existing code to create indices in background updates, since it
handles these edge cases.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
| |
Adds missing type hints to `tests.storage` package
and does not allow untyped definitions.
|
| |
|
|
|
|
|
| |
This should help reduce the number of devices racked up by e.g. simple bots that repeatedly log in.
We only delete non-e2e devices as they should be safe to delete, whereas if we delete e2e devices for a user we may accidentally break their ability to receive e2e keys for a message.
|
|
|
|
|
|
|
|
|
|
|
| |
Due to the various fixes to the StreamChangeCache it is not
safe to trust the information in the user directory or room/user
stats tables. Rebuild them as background jobs.
In particular see da777207528513c858395758bf4c023da2c2c1a3 (#14639),
and 6a8310f3dfe77acf59df2fe3e88a71b85b9b3ecc (#14435).
It may also be related to fac8a38525387e344e3595a092578e0ffedd49ae
(#14592).
|
|
|
|
| |
than requested. (#14631)
|
|
|
|
|
|
|
| |
A batch of changes intended to make it easier to trace to-device messages through the system.
The intention here is that a client can set a property org.matrix.msgid in any to-device message it sends. That ID is then included in any tracing or logging related to the message. (Suggestions as to where this field should be documented welcome. I'm not enthusiastic about speccing it - it's very much an optional extra to help with debugging.)
I've also generally improved the data we send to opentracing for these messages.
|
|
|
|
| |
Prevent callers from using the return value incorrectly by ensuring
that they explicitly check whether there was a cache hit or not.
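One common way to force that check, shown here as an illustrative sketch rather than the actual implementation, is to return a sentinel on a miss so a cached value of `None` cannot be mistaken for one:
```python
_SENTINEL = object()

class ExternalCacheWrapper:
    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key: str):
        # Returns the sentinel on a miss; callers must check for it explicitly.
        return self._data.get(key, _SENTINEL)

cache = ExternalCacheWrapper()
value = cache.get("some_key")
if value is _SENTINEL:
    pass  # definite miss: compute or fetch the value
```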
|
|
|
|
| |
replication. [rei:frrj/streams/unpsr] (#14473)
|
|
|
|
|
|
| |
StreamChangeCache.get_all_changed_entities can return None to signify
it does not have information at the given stream position. Two callers (related
to device lists and presence) were treating this response the same as an empty
list (i.e. there being no updates).
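A sketch of the distinction the fix restores (names illustrative): `None` means "the cache cannot answer, assume everything may have changed", while an empty collection means "nothing changed".
```python
from typing import Collection, Optional

def entities_to_recheck(
    changed: Optional[Collection[str]], all_entities: Collection[str]
) -> Collection[str]:
    if changed is None:
        # Cache has no data at this stream position: treat everything as
        # potentially changed.
        return all_entities
    # Cache knows the answer, which may legitimately be "no changes".
    return changed
```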
|
|\ |
|
| | |
|
| |
| |
| |
| | |
users with more than 50 non-E2E devices (#14580)
|
|/
|
|
|
|
| |
Fetch the unread notification counts used by the badge counts
in push notifications for all rooms at once (instead of fetching
them per room).
|
|
|
|
|
|
|
|
| |
This should help reduce the number of devices racked up by e.g. simple bots that repeatedly log in.
We only delete non-e2e devices as they should be safe to delete, whereas if we delete e2e devices for a user we may accidentally break their ability to receive e2e keys for a message.
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
|
|
|
|
|
|
|
|
|
|
|
| |
* Support MSC1767's `content.body` behaviour in push rules
* Add the base rules from MSC3933
* Changelog entry
* Flip condition around for finding `m.markup`
* Remove forgotten import
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Use `device_one_time_keys_count` to match MSC3202
Rename the `device_one_time_key_counts` key in responses to
`device_one_time_keys_count` to match the name specified by MSC3202.
Also change related variable/class names for consistency.
Signed-off-by: Andrew Ferrazzutti <andrewf@element.io>
* Update changelog.d/14565.misc
* Revert name change for `one_time_key_counts` key
as this is a different key altogether from `device_one_time_keys_count`,
which is used for `/sync` instead of appservice transactions.
Signed-off-by: Andrew Ferrazzutti <andrewf@element.io>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To perform an emulated upsert into a table safely, we must either:
* lock the table,
* be the only writer upserting into the table
* or rely on another unique index being present.
When the 2nd or 3rd cases were applicable, we previously avoided locking
the table as an optimization. However, as seen in #14406, it is easy to
slip up when adding new schema deltas and corrupt the database.
The only time we lock when performing emulated upserts is while waiting
for background updates on postgres. On sqlite, we do no locking at all.
Let's remove the option to skip locking tables, so that we don't shoot
ourselves in the foot again.
Signed-off-by: Sean Quah <seanq@matrix.org>
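For reference, a simplified illustration of an emulated upsert (table and columns are placeholders): without a table lock or a genuine unique index, two concurrent writers can both miss on the `UPDATE` and both fall through to the `INSERT`, leaving duplicate rows.
```python
import sqlite3

def emulated_upsert(conn: sqlite3.Connection, user_id: str, value: str) -> None:
    cur = conn.execute(
        "UPDATE example_table SET value = ? WHERE user_id = ?", (value, user_id)
    )
    if cur.rowcount == 0:
        conn.execute(
            "INSERT INTO example_table (user_id, value) VALUES (?, ?)",
            (user_id, value),
        )
```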
|
|
|
| |
Fixes https://github.com/matrix-org/synapse/issues/14536
|
|
|
|
| |
This helps avoid reading unnecessarily large amounts of data from the
table when querying with a set of room IDs.
|
|
|
| |
Fix #14108
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a local device list change is added to
`device_lists_changes_in_room`, the `converted_to_destinations` flag is
set to `FALSE` and the `_handle_new_device_update_async` background
process is started. This background process looks for unconverted rows
in `device_lists_changes_in_room`, copies them to
`device_lists_outbound_pokes` and updates the flag.
To update the `converted_to_destinations` flag, the database performs a
`DELETE` and `INSERT` internally, which fragments the table. To avoid
this, track unconverted rows using a `(stream ID, room ID)` position
instead of the flag.
From now on, the `converted_to_destinations` column indicates rows that
need converting to outbound pokes, but does not indicate whether the
conversion has already taken place.
Closes #14037.
Signed-off-by: Sean Quah <seanq@matrix.org>
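A sketch of fetching unconverted rows by position rather than by flag (column names follow the message above; the real query is more involved): everything after the last processed `(stream ID, room ID)` pair still needs converting.
```python
import sqlite3
from typing import List, Tuple

def get_unconverted_rows(
    conn: sqlite3.Connection, last_stream_id: int, last_room_id: str, limit: int
) -> List[Tuple[int, str]]:
    return conn.execute(
        """
        SELECT stream_id, room_id FROM device_lists_changes_in_room
        WHERE (stream_id, room_id) > (?, ?)
        ORDER BY stream_id, room_id
        LIMIT ?
        """,
        (last_stream_id, last_room_id, limit),
    ).fetchall()
```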
|
|
|
|
|
|
|
|
|
| |
Avoid an n+1 query problem and fetch the bundled aggregations for
m.reference relations in a single query instead of a query per event.
This applies similar logic as was previously done for edits in
8b309adb436c162510ed1402f33b8741d71fc058 (#11660), threads
in b65acead428653b988351ae8d7b22127a22039cd (#11752), and
annotations in 1799a54a545618782840a60950ef4b64da9ee24d (#14491).
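A sketch of the single-query shape (schema simplified; a standalone sqlite3 connection stands in for Synapse's database layer): fetch the `m.reference` relations for all events at once and group them in Python, instead of issuing one query per event.
```python
import sqlite3
from typing import Dict, Iterable, List

def get_references_for_events(
    conn: sqlite3.Connection, event_ids: Iterable[str]
) -> Dict[str, List[str]]:
    event_ids = list(event_ids)
    if not event_ids:
        return {}
    placeholders = ", ".join("?" for _ in event_ids)
    rows = conn.execute(
        f"""
        SELECT relates_to_id, event_id FROM event_relations
        WHERE relation_type = 'm.reference' AND relates_to_id IN ({placeholders})
        """,
        event_ids,
    ).fetchall()
    result: Dict[str, List[str]] = {event_id: [] for event_id in event_ids}
    for relates_to_id, event_id in rows:
        result[relates_to_id].append(event_id)
    return result
```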
|
|
|
|
|
|
|
|
| |
Avoid an n+1 query problem and fetch the bundled aggregations for
m.annotation relations in a single query instead of a query per event.
This applies similar logic as was previously done for edits in
8b309adb436c162510ed1402f33b8741d71fc058 (#11660) and threads
in b65acead428653b988351ae8d7b22127a22039cd (#11752).
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Add tests for StreamIdGenerator
* Drive-by: annotate all defs
* Revert "Revert "Remove slaved id tracker (#14376)" (#14463)"
This reverts commit d63814fd736fed5d3d45ff3af5e6d3bfae50c439, which in
turn reverted 36097e88c4da51fce6556a58c49bd675f4cf20ab. This restores
the latter.
* Fix StreamIdGenerator not handling unpersisted IDs
Spotted by @erikjohnston.
Closes #14456.
* Changelog
Co-authored-by: Nick Mills-Barrett <nick@fizzadar.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
|
|
|
|
|
|
|
| |
Remove type hints from comments which have been added
as Python type hints. This helps avoid drift between comments
and reality, as well as removing redundant information.
Also adds some missing type hints which were simple to fill in.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As part of the database migration to support threaded receipts, there is
a possible window in between
`73/08thread_receipts_non_null.sql.postgres` removing the original
unique constraints on `receipts_linearized` and `receipts_graph` and the
`receipts_linearized_unique_index` and `receipts_graph_unique_index`
background updates from `72/08thread_receipts.sql` completing where
the unique constraints on `receipts_linearized` and `receipts_graph` are
missing. Any emulated upserts on these tables must therefore be
performed with a lock held, otherwise duplicate rows can end up in the
tables when there are concurrent emulated upserts. Fix the missing lock.
Note that emulated upserts no longer happen by default on sqlite, since
the minimum supported version of sqlite supports native upserts by
default now.
Finally, clean up any duplicate receipts that may have crept in before
trying to create the `receipts_graph_unique_index` and
`receipts_linearized_unique_index` unique indexes.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
| |
This reverts commit 36097e88c4da51fce6556a58c49bd675f4cf20ab.
|
|
|
|
|
|
|
|
|
|
|
| |
* Pull out hero selection logic
* Include heroes in partial join response's state
* Changelog
* Fixup trial test
* Remove TODO
|
|
|
|
| |
just give you completely arbitrary partial-state events. (#14417)
|
|
|
|
|
| |
This matches the multi instance writer ID generator class which can
both handle advancing the current token over replication and by calling
the database.
|
|
|
|
| |
By removing unused variables and making some arguments
required which are always provided.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
PostgreSQL may underestimate the number of distinct `room_id`s in
`event_search`, which can cause it to use table scans for queries for
multiple rooms.
Fix this by setting `n_distinct` on the column.
Resolves #14402.
Signed-off-by: Sean Quah <seanq@matrix.org>
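The rough shape of such a delta is sketched below (the exact fraction, and whether an `ANALYZE` accompanies it, are assumptions on my part): a negative `n_distinct` tells PostgreSQL that the number of distinct values is that fraction of the row count, so the planner stops assuming only a handful of rooms exist.
```python
def run_upgrade(cur) -> None:
    cur.execute(
        "ALTER TABLE event_search ALTER COLUMN room_id SET (n_distinct = -0.01)"
    )
    # Refresh the planner statistics for the column.
    cur.execute("ANALYZE event_search (room_id)")
```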
|
| |
|
|
|
| |
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
|
| |
When this background update did its last batch, it would try to update all the
events that had been inserted since the bgupdate started, which could cause a
table-scan. Make sure we limit the update correctly.
|
|
|
|
|
| |
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
|
|
|
| |
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
|
|
|
|
|
|
|
| |
If configured an OIDC IdP can log a user's session out of
Synapse when they log out of the identity provider.
The IdP sends a request directly to Synapse (and must be
configured with an endpoint) when a user logs out.
|
|
|
|
| |
(#14304)
|
|
|
|
| |
For ease of reading we switch from concatenated strings to
triple quote strings.
|
|
|
|
| |
(`get_users_in_room` mis-use) (#13958)
|
|
|
|
|
|
|
| |
PostgreSQL 14 changed the behavior of `websearch_to_tsquery` to
improve some behaviour.
The tests were hitting edge cases in the handling of hanging double
quotes. This fixes the tests to take the PostgreSQL version into account.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fix presence bug introduced in 1.64 by #13313
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
* Add changelog
* Add DISTINCT
* Apply suggestions from code review
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
|
|
|
|
|
|
|
|
|
|
|
| |
* Save login tokens in database
Signed-off-by: Quentin Gliech <quenting@element.io>
* Add upgrade notes
* Track login token reuse in a Prometheus metric
Signed-off-by: Quentin Gliech <quenting@element.io>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
underlying DB. (#11635)
Support a unified search query syntax which leverages more of the full-text
search of each database supported by Synapse.
Supports, with the same syntax across Postgresql 11+ and Sqlite:
- quoted "search terms"
- `AND`, `OR`, `-` (negation) operators
- Matching words based on their stem, e.g. searches for "dog" matches
documents containing "dogs".
This is achieved by
- If on postgresql 11+, pass the user input to `websearch_to_tsquery`
- If on sqlite, manually parse the query and transform it into the sqlite-specific
query syntax.
Note that postgresql 10, which is close to end-of-life, falls back to using
`phraseto_tsquery`, which only supports a subset of the features.
Multiple terms separated by a space are implicitly ANDed.
Note that:
1. There is no escaping of full-text syntax that might be supported by the database;
e.g. `NOT`, `NEAR`, `*` in sqlite. This runs the risk that people might discover this
as accidental functionality and depend on something we don't guarantee.
2. English text is assumed for stemming. To support other languages, either the target
language needs to be known at the time of indexing the message (via room metadata,
or otherwise), or a separate index for each language supported could be created.
Sqlite docs: https://www.sqlite.org/fts3.html#full_text_index_queries
Postgres docs: https://www.postgresql.org/docs/11/textsearch-controls.html
|
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When the last event in a thread is redacted we need to update
the threads table:
* Find the new latest event in the thread and store it into the table; or
* Remove the thread from the table if it is no longer a thread (i.e. all
events in the thread were redacted).
|
| | |
|
| |
| |
| | |
Signed-off-by: Lorenzo Manacorda <lorenzo@mailbox.org>
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Show erasure status when listing users in the Admin API
* Use USING when joining erased_users
* Add changelog entry
* Revert "Use USING when joining erased_users"
This reverts commit 30bd2bf106415caadcfdbdd1b234ef2b106cc394.
* Make the erased check work on postgres
* Add a testcase for showing erased user status
* Appease the style linter
* Explicitly convert `erased` to bool to make SQLite consistent with Postgres
This also adds us an easy way in to fix the other accidentally integered columns.
* Move erasure status test to UsersListTestCase
* Include user erased status when fetching user info via the admin API
* Document the erase status in user_admin_api
* Appease the linter and mypy
* Signpost comments in tests
Co-authored-by: Tadeusz Sośnierz <tadeusz@sosnierz.com>
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
|
|/
|
|
|
| |
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
Co-authored-by: David Robertson <davidr@element.io>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
whether they are near a gap or not (#14215)
Fix MSC3030 `/timestamp_to_event` endpoint returning `outliers` when it has no idea whether they are near a gap or not (and is therefore unable to determine whether it's actually the closest event). The reason Synapse doesn't know whether an `outlier` is next to a gap is that our gap checks rely on entries in the `event_edges`, `event_forward_extremities`, and `event_backward_extremities` tables, which is [not the case for `outliers`](https://github.com/matrix-org/synapse/blob/2c63cdcc3f1aa4625e947de3c23e0a8133c61286/docs/development/room-dag-concepts.md#outliers).
Also fixes MSC3030 Complement `can_paginate_after_getting_remote_event_from_timestamp_to_event_endpoint` test flake. Although this acted flakey in Complement, if `sync_partial_state` raced and beat us before `/timestamp_to_event`, then even if we retried the failing `/context` request it wouldn't work until we made this Synapse change. With this PR, Synapse will never return an `outlier` event so that test will always go and ask over federation.
Fix https://github.com/matrix-org/synapse/issues/13944
### Why did this fail before? Why was it flakey?
Sleuthing the server logs on the [CI failure](https://github.com/matrix-org/synapse/actions/runs/3149623842/jobs/5121449357#step:5:5805), it looks like `hs2:/timestamp_to_event` found `$NP6-oU7mIFVyhtKfGvfrEQX949hQX-T-gvuauG6eurU` as an `outlier` event locally. Then when we went and asked for it via `/context`, since it's an `outlier`, it was filtered out of the results -> `You don't have permission to access that event.`
This is reproducible when `sync_partial_state` races and persists `$NP6-oU7mIFVyhtKfGvfrEQX949hQX-T-gvuauG6eurU` as an `outlier` before we evaluate `get_event_for_timestamp(...)`. To consistently reproduce locally, just add a delay at the [start of `get_event_for_timestamp(...)`](https://github.com/matrix-org/synapse/blob/cb20b885cb4bd1648581dd043a184d86fc8c7a00/synapse/handlers/room.py#L1470-L1496) so it always runs after `sync_partial_state` completes.
```py
from twisted.internet import task as twisted_task
d = twisted_task.deferLater(self.hs.get_reactor(), 3.5)
await d
```
In a run where it passes, on `hs2`, `get_event_for_timestamp(...)` finds a different event locally which is next to a gap and we request from a closer one from `hs1` which gets backfilled. And since the backfilled event is not an `outlier`, it's returned as expected during `/context`.
With this PR, Synapse will never return an `outlier` event so that test will always go and ask over federation.
|
|
|
|
|
|
| |
And don't include blank opentracing stuff in device list updates.
Signed-off-by: Aaron Raimist <aaron@raim.ist>
|
|
|
|
|
|
|
|
| |
finished) (#14222)
This avoids running a forced update of null thread_id rows.
An index is added (in the background) to hopefully make this
easier in the future.
|
|
|
|
| |
a partial join (#14126)
|
| |
|
|
|
|
| |
(#14161)
|
|
|
| |
Gated behind an experimental configuration flag.
|
|
|
|
|
| |
This should fix a race where the event notification comes in over
replication before the state replication, leaving a window during
which a sync may get an incorrect list of rooms for the user.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
invalid (#13816)
While https://github.com/matrix-org/synapse/pull/13635 stops us from doing the slow thing after we've already done it once, this PR stops us from doing one of the slow things in the first place.
Related to
- https://github.com/matrix-org/synapse/issues/13622
- https://github.com/matrix-org/synapse/pull/13635
- https://github.com/matrix-org/synapse/issues/13676
Part of https://github.com/matrix-org/synapse/issues/13356
Follow-up to https://github.com/matrix-org/synapse/pull/13815 which tracks event signature failures.
With this PR, we avoid the call to the costly `_get_state_ids_after_missing_prev_event` because the signature failure will count as an attempt before and we filter events based on the backoff before calling `_get_state_ids_after_missing_prev_event` now.
For example, this will save us 156s out of the 185s total that this `matrix.org` `/messages` request took. If you want to see the full Jaeger trace of this, you can drag and drop this `trace.json` into your own Jaeger, https://gist.github.com/MadLittleMods/4b12d0d0afe88c2f65ffcc907306b761
To explain this exact scenario around `/messages` -> backfill, we call `/backfill` and first check the signatures of the 100 events. We see bad signature for `$luA4l7QHhf_jadH3mI-AyFqho0U2Q-IXXUbGSMq6h6M` and `$zuOn2Rd2vsC7SUia3Hp3r6JSkSFKcc5j3QTTqW_0jDw` (both member events). Then we process the 98 events remaining that have valid signatures but one of the events references `$luA4l7QHhf_jadH3mI-AyFqho0U2Q-IXXUbGSMq6h6M` as a `prev_event`. So we have to do the whole `_get_state_ids_after_missing_prev_event` rigmarole which pulls in those same events which fail again because the signatures are still invalid.
- `backfill`
- `outgoing-federation-request` `/backfill`
- `_check_sigs_and_hash_and_fetch`
- `_check_sigs_and_hash_and_fetch_one` for each event received over backfill
- ❗ `$luA4l7QHhf_jadH3mI-AyFqho0U2Q-IXXUbGSMq6h6M` fails with `Signature on retrieved event was invalid.`: `unable to verify signature for sender domain xxx: 401: Failed to find any key to satisfy: _FetchKeyRequest(...)`
- ❗ `$zuOn2Rd2vsC7SUia3Hp3r6JSkSFKcc5j3QTTqW_0jDw` fails with `Signature on retrieved event was invalid.`: `unable to verify signature for sender domain xxx: 401: Failed to find any key to satisfy: _FetchKeyRequest(...)`
- `_process_pulled_events`
- `_process_pulled_event` for each validated event
- ❗ Event `$Q0iMdqtz3IJYfZQU2Xk2WjB5NDF8Gg8cFSYYyKQgKJ0` references `$luA4l7QHhf_jadH3mI-AyFqho0U2Q-IXXUbGSMq6h6M` as a `prev_event` which is missing so we try to get it
- `_get_state_ids_after_missing_prev_event`
- `outgoing-federation-request` `/state_ids`
- ❗ `get_pdu` for `$luA4l7QHhf_jadH3mI-AyFqho0U2Q-IXXUbGSMq6h6M` which fails the signature check again
- ❗ `get_pdu` for `$zuOn2Rd2vsC7SUia3Hp3r6JSkSFKcc5j3QTTqW_0jDw` which fails the signature check
|
|\ |
|
| | |
|
| |
| |
| | |
Co-authored-by: Erik Johnston <erik@matrix.org>
|
| |
| |
| |
| |
| | |
Broken by #14045. Fixes #14120.
Introduced in v1.69.0rc2.
|
| |
| |
| |
| | |
Ensure that the upsert will work properly by first updating any existing
rows (in the same way that the background update to backfill data works).
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The root node of a thread (and events related to it) are considered
"part of a thread" when validating receipts. This allows clients which
show the root node in both the main timeline and the threaded timeline
to easily send receipts in either.
Note that threaded notifications are not created for these events; these
events create notifications on the main timeline.
|
| |
| |
| |
| |
| |
| |
| | |
The callers either set a default limit or manually handle a None-limit
later on (by setting a default value).
Update the callers to always instantiate PaginationConfig with a default
limit and then assume the limit is non-None.
|
| |
| |
| | |
This was missed in 2b6d41ebd685fb546e52acdbcb0024dfcf5a5db1 (#13824).
|
| | |
|
| |
| |
| |
| |
| | |
Fix a broken conflict in e6e876b9b158f47811b6dfedd8783f658ce960a4,
by not stomping over a field right after creating it.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Implement the /threads endpoint from MSC3856.
This is currently unstable and behind an experimental configuration
flag.
It includes a background update to backfill data, results from
the /threads endpoint will be partial until that finishes.
|
| |
| |
| |
| |
| | |
A receipt's thread ID, if one exists, should be added to the
body of a receipt.
|
| |
| |
| |
| | |
Fixes a bug where threaded receipts could not be sent for the
main timeline.
|
| | |
|
| |
| |
| | |
MSC3772 has been abandoned.
|
| |
| |
| |
| | |
have the original event. (#13813)
|
| |
| |
| |
| |
| |
| |
| | |
Fixes two related bugs:
* No edit information was bundled for events which aren't `m.room.message`.
* `m.new_content` was not applied for those events.
|
| |
| |
| |
| |
| |
| |
| |
| | |
Fixes two related bugs:
* The handling of `[null]` for a `room_types` filter was incorrect.
* The ordering of arguments when providing both a network tuple
and room type field was incorrect.
|
| |
| |
| |
| | |
Update the HTTP and email pushers to consider threaded read receipts
when fetching unread events.
|
| |
| |
| |
| |
| |
| | |
Consider an event to be part of a thread if you can follow a
chain of relations up to a thread root.
Part of MSC3773 & MSC3771.
|
| |
| |
| |
| | |
Applies the proper logic for unthreaded and threaded receipts to either
apply to all events in the room or only events in the same thread, respectively.
|
| |
| |
| |
| |
| |
| |
| |
| | |
When retrieving counts of notifications segment the results based on the
thread ID, but choose whether to return them as individual threads or as
a single summed field by letting the client opt-in via a sync flag.
The summarization code is also updated to be per thread, instead of per
room.
|
|/
|
|
|
|
| |
Switches to the stable identifier for MSC3786 and enables it
by default.
This disables pushes of m.room.server_acl events.
|
|
|
| |
On matrix.org we have ~5 million stale rows in `event_push_actions_staging`, let's add a background job to make sure we clear them out.
|
|
|
| |
Introduced in #13719
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Instead of running a single large query, run a single query for
user-only lookups and additional queries for batches of user device
lookups.
Resolves #13580.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
| |
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
|
|
|
| |
This reverts commit 6d543d6d9f56e39199b7e460d0081b02d61f12be.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Update mypy and mypy-zope
* Unignore assigning to LogRecord attributes
Presumably https://github.com/python/typeshed/pull/8064 makes this ok
Cherry-picked from #13521
* Remove unused ignores due to mypy ParamSpec fixes
https://github.com/python/mypy/pull/12668
Cherry-picked from #13521
* Remove additional unused ignores
* Fix new mypy complaints related to `assertGreater`
Presumably due to https://github.com/python/typeshed/pull/8077
* Changelog
* Reword changelog
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
|
|
|
|
| |
Fixes #13942. Introduced in #13575.
Basically, let's only get the ordered set of hosts out of the DB if we need an ordered set of hosts. Since we split the function up the caching won't be as good, but I think it will still be fine as e.g. multiple backfill requests for the same room will hit the cache.
|
|
|
|
|
|
|
|
| |
* Reproduce bug
* Compute `least_function` first
* Substitute `least_function` with an f-string
* Bugfix: avoid overflow
Co-authored-by: Eric Eastwood <erice@element.io>
|
| |
|
| |
|
|
|
|
| |
used (using MSC3866) (#13556)
|
| |
|
|
|
|
|
| |
By renaming it and updating the docstring.
Additionally, refactors a method which is used only by tests.
|
| |
|
|
|
|
|
|
|
|
|
| |
There is no need to grab thousands of backfill points when we only need 5 to make the `/backfill` request with. We need to grab a few extra in case the first few aren't visible in the history.
Previously, we grabbed thousands of backfill points from the database, then sorted and filtered them in the app. Fetching the 4.6k backfill points for `#matrix:matrix.org` from the database takes ~50ms - ~570ms so it's not like this saves a lot of time 🤷. But it might save us more time now that `get_backfill_points_in_room`/`get_insertion_event_backward_extremities_in_room` are more complicated after https://github.com/matrix-org/synapse/pull/13635
This PR moves the filtering and limiting to the SQL query so we just have less data to work with in the first place.
Part of https://github.com/matrix-org/synapse/issues/13356
|
|
|
|
|
|
|
| |
(#13933)" (#13935)
This reverts commit 7766bd5b354cd4ea1a33351ba320e54a14d3aeac (#13933).
The unused column is actually used, but much further down in the function.
|
| |
|
|
|
|
|
|
|
| |
c.f. #12993 (comment), point 3
This stores all device list updates that we receive while partial joins are ongoing, and processes them once we have the full state.
Note: We don't actually process the device lists in the same ways as if we weren't partially joined. Instead of updating the device list remote cache, we simply notify local users that a change in the remote user's devices has happened. I think this is safe as if the local user requests the keys for the remote user and we don't have them we'll simply fetch them as normal.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fix https://github.com/matrix-org/synapse/issues/13856
Fix https://github.com/matrix-org/synapse/issues/13865
> Discovered while trying to make Synapse fast enough for [this MSC2716 test for importing many batches](https://github.com/matrix-org/complement/pull/214#discussion_r741678240). As an example, disabling the `have_seen_event` cache saves 10 seconds for each `/messages` request in that MSC2716 Complement test because we're not making as many federation requests for `/state` (speeding up `have_seen_event` itself is related to https://github.com/matrix-org/synapse/issues/13625)
>
> But this will also make `/messages` faster in general so we can include it in the [faster `/messages` milestone](https://github.com/matrix-org/synapse/milestone/11).
>
> *-- https://github.com/matrix-org/synapse/issues/13856*
### The problem
`_invalidate_caches_for_event` doesn't run in monolith mode which means we never even tried to clear the `have_seen_event` and other caches. And even in worker mode, it only runs on the workers, not the master (AFAICT).
Additionally there was bug with the key being wrong so `_invalidate_caches_for_event` never invalidates the `have_seen_event` cache even when it does run.
Because we were using `@cachedList` wrongly, it was putting items in the cache under keys like `((room_id, event_id),)` with a `set` in a `set` (ex. `(('!TnCIJPKzdQdUlIyXdQ:test', '$Iu0eqEBN7qcyF1S9B3oNB3I91v2o5YOgRNPwi_78s-k'),)`) and we were trying to invalidate with just `(room_id, event_id)`, which did nothing.
|
| |
|
|
|
|
| |
(#13885)
|
|
|
|
|
| |
* Adds a docstring.
* Reduces a small amount of duplicated code.
* Improves tests.
|
|
|
| |
Including another batch of fixes to the schema dump script
|
|
|
|
|
| |
This moves all the invalidations into a single place and de-duplicates
the code involved in invalidating caches for a given event by using
the base class method.
|
|
|
|
|
|
|
|
|
|
| |
Only try to backfill event if we haven't tried before recently (exponential backoff). No need to keep trying the same backfill point that fails over and over.
Fix https://github.com/matrix-org/synapse/issues/13622
Fix https://github.com/matrix-org/synapse/issues/8451
Follow-up to https://github.com/matrix-org/synapse/pull/13589
Part of https://github.com/matrix-org/synapse/issues/13356
|
|
|
|
|
|
|
|
|
| |
Part of the work for #12993.
Once #12993 is fully resolved, we expect `/keys/changes` to behave
sensibly when joined to a room with partial state.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
| |
Updates the `/receipts` endpoint and receipt EDU handler to parse a
`thread_id` from the body and insert it in the database.
|
|
|
|
|
|
|
|
|
|
|
| |
Use the provided list of servers in the room from the `/send_join`
response, since we will not know which users are in the room. This
isn't sufficient to ensure that all remote servers receive the right
device list updates, since the `/send_join` response may be inaccurate
or we may calculate the membership state of new users in the room
incorrectly.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
|
| |
This fixes a bug where the `/relations` API with `dir=f` would
skip the first item of each page (except the first page), causing
incomplete data to be returned to the client.
|
|
|
| |
Second half of the MSC3881 implementation
|
|
|
| |
Partial implementation of MSC3881
|
|
|
| |
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
|
|
|
|
|
|
|
| |
* Generate separate snapshots for sqlite, postgres and common
* Cleanup postgres dbs in the TRAP
* Say which logical DB we're applying updates to
* Run background updates on the state DB
* Add new option for accepting a SCHEMA_NUMBER
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
(#13825)
`event_failed_pull_attempts` added in https://github.com/matrix-org/synapse/pull/13589
MSC2716 related tables added in:
- https://github.com/matrix-org/synapse/pull/10245/files#diff-3d42dfb44d02f7de3aada105e0bdc1cc9dd7f953cbf0f36c5d0f50827bf0320aR1
- Renamed in https://github.com/matrix-org/synapse/pull/10838/files#diff-2730bfbe9e688b55e46f9371aefe67dac2bd2b2b7d9d6b92774eea1fcfae156dR1
- https://github.com/matrix-org/synapse/pull/10498/files#diff-c52bbfbb5921a3f6f023b24343668479d966fac164f13b7c39d2197ce3afa7a5R1
|
|
|
|
| |
This is useful to upsert against a table which has a unique
partial index while avoiding conflicts.
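For illustration, the kind of statement this enables (table, columns and predicate are placeholders): `ON CONFLICT` can target a partial unique index by repeating the index's `WHERE` clause.
```python
UPSERT_AGAINST_PARTIAL_INDEX = """
INSERT INTO example_receipts (room_id, user_id, thread_id, data)
VALUES (?, ?, ?, ?)
ON CONFLICT (room_id, user_id) WHERE thread_id IS NULL
DO UPDATE SET data = EXCLUDED.data
"""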
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We can follow up this PR with:
1. Only try to backfill from an event if we haven't tried recently -> https://github.com/matrix-org/synapse/issues/13622
1. When we decide to backfill that event again, process it in the background so it doesn't block and make `/messages` slow when we know it will probably fail again -> https://github.com/matrix-org/synapse/issues/13623
1. Generally track failures everywhere we try and fail to pull an event over federation -> https://github.com/matrix-org/synapse/issues/13700
Fix https://github.com/matrix-org/synapse/issues/13621
Part of https://github.com/matrix-org/synapse/issues/13356
Mentioned in [internal doc](https://docs.google.com/document/d/1lvUoVfYUiy6UaHB6Rb4HicjaJAU40-APue9Q4vzuW3c/edit#bookmark=id.qv7cj51sv9i5)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Adds a `thread_id` column to the `event_push_actions`, `event_push_actions_staging`,
and `event_push_summary` tables. This will allow notifications to be segmented by thread
in a future pull request. The `thread_id` column stores the root event ID or the special
value `"main"`.
The `thread_id` column for `event_push_actions` and `event_push_summary` is
backfilled with `"main"` for all existing rows. New entries into `event_push_actions`
and `event_push_actions_staging` will get the proper thread ID.
`receipts_linearized` and `receipts_graph` also gain a `thread_id` column, which is similar,
except `NULL` is a special value meaning the receipt is "unthreaded".
See MSC3771 and MSC3773 for where this data will be useful.
|
|
|
|
|
|
|
| |
Partial indices have been supported since SQLite 3.8, but Synapse
now requires >= 3.27, so we can enable support for them.
This requires rebuilding previous indices which were partial on
PostgreSQL, but not on SQLite.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove incorrect migration file from `state` logical DB
The table `ex_outlier_stream` is part of the `main` logical DB; it
should not have been created in the `state` logical DB. We remove this
migration now as a tidy-up.
Note: we cannot `DROP TABLE IF EXISTS ex_outlier_stream` in a new
migration, because some (most) instances of Synapse host both of these
logical DBs on the same DB cluster.
* Changelog
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a remote user leaves the last room shared with the homeserver, we
have to mark their device list as unsubscribed, otherwise we would hold
on to a stale device list in our cache. Crucially, the device list would
remain cached even after the remote user rejoined the room, which could
lead to E2EE failures until the next change to the remote user's device
list.
Fixes #13651.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
| |
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
|
| |
|
| |
|
|
|
|
|
|
|
| |
* Remove checks for membership column in current_state_events
* Add schema script to force through the
`current_state_events_membership` background job
Contributed by Nick @ Beeper (@fizzadar).
|
|
|
|
|
|
| |
Instead of a delete, then insert.
This was previously done for `receipts_linearized` in
2dc430d36ef793b38d6d79ec8db4ea60588df2ee (#7607).
|
| |
|
|
|
| |
Co-authored-by: reivilibre <olivier@librepush.net>
|
|
|
|
|
|
|
| |
Update the docstrings for `get_users_in_room` and
`get_current_hosts_in_room` to explain the impact of partial state.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
|
|
|
|
|
|
| |
(#13748)
Handle malformed user IDs with no colons in `get_current_hosts_in_room`.
It's not currently possible for a malformed user ID to join a room, so
this error would never be hit.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
| |
When backfilling, `_get_state_ids_after_missing_prev_event` calls [`get_metadata_for_events`](https://github.com/matrix-org/synapse/blob/26bc26586b4b95d63ce7e453e9312469843f796e/synapse/handlers/federation_event.py#L1133). For `#matrix:matrix.org`, it's called with 77k `state_events` which means 77 calls to the database and takes 28 seconds.
|
| |
|
|
|
|
| |
version numbers. (#13706)
|
|
|
|
| |
Otherwise they'll be leaked due to the filtering code only respecting
the stable identifiers for private read receipts.
|
|
|
| |
Fixes #13613.
|
|
|
| |
Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>
|
|
|
|
|
|
|
| |
The method doesn't actually do any data fetching and the method that
does, `_get_joined_profile_from_event_id`, has its own cache.
Signed off by Nick @ Beeper (@Fizzadar).
|
| |
|
|
|
|
|
|
|
|
|
| |
MSC3030 (#13658)
Discovered while working on https://github.com/matrix-org/synapse/pull/13589 and I had all the messages at the same timestamp in the tests.
Part of https://github.com/matrix-org/matrix-spec-proposals/pull/3030
Complement tests: https://github.com/matrix-org/complement/pull/457
|
| |
|
|
|
| |
By using `execute_values` instead of `execute_batch`.
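A sketch of the swap (table and columns are illustrative): `execute_values` collapses many rows into a single `INSERT ... VALUES` statement, which is considerably faster than `execute_batch`'s statement-per-row behaviour.
```python
from psycopg2.extras import execute_values

def insert_rows(cur, rows) -> None:
    execute_values(
        cur,
        "INSERT INTO event_push_actions_staging (event_id, user_id, actions) VALUES %s",
        rows,
    )
```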
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Optimize how we calculate `likely_domains` during backfill because I've seen this take 17s in production just to `get_current_state` which is used to `get_domains_from_state` (see case [*2. Loading tons of events* in the `/messages` investigation issue](https://github.com/matrix-org/synapse/issues/13356)).
There are 3 ways we currently calculate hosts that are in the room:
1. `get_current_state` -> `get_domains_from_state`
- Used in `backfill` to calculate `likely_domains` and `/timestamp_to_event` because it was cargo-culted from `backfill`
- This one is being eliminated in favor of `get_current_hosts_in_room` in this PR 🕳
1. `get_current_hosts_in_room`
- Used for other federation things like sending read receipts and typing indicators
1. `get_hosts_in_room_at_events`
- Used when pushing out events over federation to other servers in the `_process_event_queue_loop`
Fix https://github.com/matrix-org/synapse/issues/13626
Part of https://github.com/matrix-org/synapse/issues/13356
Mentioned in [internal doc](https://docs.google.com/document/d/1lvUoVfYUiy6UaHB6Rb4HicjaJAU40-APue9Q4vzuW3c/edit#bookmark=id.2tvwz3yhcafh)
### Query performance
#### Before
The query from `get_current_state` sucks just because we have to get all 80k events. And we see almost the exact same performance locally trying to get all of these events (16s vs 17s):
```
synapse=# SELECT type, state_key, event_id FROM current_state_events WHERE room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
Time: 16035.612 ms (00:16.036)
synapse=# SELECT type, state_key, event_id FROM current_state_events WHERE room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
Time: 4243.237 ms (00:04.243)
```
But what about `get_current_hosts_in_room`: When there is 8M rows in the `current_state_events` table, the previous query in `get_current_hosts_in_room` took 13s from complete freshness (when the events were first added). But takes 930ms after a Postgres restart or 390ms if running back to back to back.
```sh
$ psql synapse
synapse=# \timing on
synapse=# SELECT COUNT(DISTINCT substring(state_key FROM '@[^:]*:(.*)$'))
FROM current_state_events
WHERE
type = 'm.room.member'
AND membership = 'join'
AND room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
count
-------
4130
(1 row)
Time: 13181.598 ms (00:13.182)
synapse=# SELECT COUNT(*) from current_state_events where room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
count
-------
80814
synapse=# SELECT COUNT(*) from current_state_events;
count
---------
8162847
synapse=# SELECT pg_size_pretty( pg_total_relation_size('current_state_events') );
pg_size_pretty
----------------
4702 MB
```
#### After
I'm not sure how long it takes from complete freshness as I only really get that opportunity once (maybe restarting computer but that's cumbersome) and it's not really relevant to normal operating times. Maybe you get closer to the fresh times the more access variability there is so that Postgres caches aren't as exact. Update: The longest I've seen this run for is 6.4s and 4.5s after a computer restart.
After a Postgres restart, it takes 330ms and running back to back takes 260ms.
```sh
$ psql synapse
synapse=# \timing on
Timing is on.
synapse=# SELECT
substring(c.state_key FROM '@[^:]*:(.*)$') as host
FROM current_state_events c
/* Get the depth of the event from the events table */
INNER JOIN events AS e USING (event_id)
WHERE
c.type = 'm.room.member'
AND c.membership = 'join'
AND c.room_id = '!OGEhHVWSdvArJzumhm:matrix.org'
GROUP BY host
ORDER BY min(e.depth) ASC;
Time: 333.800 ms
```
#### Going further
To improve things further we could add a `limit` parameter to `get_current_hosts_in_room`. Realistically, we don't need 4k domains to choose from because there is no way we're going to query that many before we a) probably get an answer or b) we give up.
Another thing we can do is optimize the query to use a index skip scan:
- https://wiki.postgresql.org/wiki/Loose_indexscan
- Index Skip Scan, https://commitfest.postgresql.org/37/1741/
- https://www.timescale.com/blog/how-we-made-distinct-queries-up-to-8000x-faster-on-postgresql/
|
|
|
|
|
| |
first (`get_users_in_room` mis-use) (#13608)
See https://github.com/matrix-org/synapse/pull/13575#discussion_r953023755
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
and avoid errors caused by sorting alphabetical instance name which can be `null` (#13585)
When loading current ids, sort by stream ID so that we don't overwrite the `current_position` of an instance with a lower stream ID than we're actually at ([discussion](https://github.com/matrix-org/synapse/pull/13585#discussion_r951795379)). Previously, it sorted alphabetically by instance name, which can be `null` and throw errors but, more importantly, accomplishes nothing.
Fixes the following startup error which is why I started looking into this area:
```
$ poetry run synapse_homeserver --config-path homeserver.yaml
****************************************************************
Error during initialisation:
'<' not supported between instances of 'NoneType' and 'str'
There may be more information in the logs.
****************************************************************
```
Somehow my database ended up looking like the following, notice the `instance_name` is `null` in the db, and we can't sort `NoneType` things. Another question is why do we see the `instance_name` as `null` sometimes instead of `master` in monolith mode?
```
$ psql synapse
synapse=# SELECT * FROM stream_positions;
stream_name | instance_name | stream_id
-----------------+---------------+-----------
account_data | master | 1242
events | master | 1787
to_device | master | 58
presence_stream | master | 485638
receipts | master | 341
backfill | master | -139106
(6 rows)
synapse=# SELECT instance_name, stream_id FROM receipts_linearized;
instance_name | stream_id
---------------+-----------
| 211
| 3
| 4
| 212
| 213
| 224
| 228
| 164
| 313
| 253
| 38
| 321
| 324
| 189
| 192
| 193
| 194
| 195
| 197
| 198
| 275
| 79
| 339
| 340
| 82
| 341
| 84
| 85
| 91
| 119
```
|
| |
|
|
|
| |
Broken in #13573.
|
| |
|
|
|
| |
The profile objects are never used and increase cache size significantly.
|
|
|
|
|
|
|
|
|
| |
`Requester` instead of the `UserID` (#13024)
Part of #13019
This changes all the permission-related methods to rely on the Requester instead of the UserID. This is a first step towards enabling scoped access tokens at some point, since I expect the Requester to have scope-related information in it.
It also changes methods which figure out the user/device/appservice out of the access token to return a Requester instead of something else. This avoids having store-related objects in the methods signatures.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use a state filter or accept partial state in a few places where we
request state, to avoid blocking.
To make lazy-loading `/sync`s work, we need to provide the memberships
of event senders, which are not guaranteed to be in the room state.
Instead we dig through auth events for memberships to present to
clients. The auth events of an event are guaranteed to contain a
passable membership event, otherwise the event would have been rejected.
Note that this only covers the common code paths encountered during
testing. There has been no exhaustive checking of all sync code paths.
Fixes #13146.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
|
| |
could be larger than the number of results you can actually query for. (#13525)
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
| |
|
|
|
|
|
|
|
|
|
| |
Instrument the federation/backfill part of `/messages` so it's easier to follow what's going on in Jaeger when viewing a trace.
Split out from https://github.com/matrix-org/synapse/pull/13440
Follow-up from https://github.com/matrix-org/synapse/pull/13368
Part of https://github.com/matrix-org/synapse/issues/13356
|
|
|
|
| |
stated. (#13514)
|
|
|
|
|
|
|
|
|
|
|
|
| |
This improves load times for push rules:
| Version | Time per user | Time for 1k users |
| -------------------- | ------------- | ----------------- |
| Before | 138 µs | 138 ms |
| Now (with custom) | 2.11 µs | 2.11 ms |
| Now (without custom) | 49.7 ns | 0.05 ms |
This therefore has a large impact on send times for rooms
with large numbers of local users in the room.
|
|
|
| |
Instrument FederationStateIdsServlet - `/state_ids` so it's easier to follow what's going on in Jaeger when viewing a trace.
|
|
|
|
|
|
|
|
| |
This reverts commit f383b9b3eceaa082d5ae690550fe41460b711779. Other PRs
were seeing mypy failures that looked to be related to mypy-zope.
Confusingly, we didn't see this on #13521.
Revert this for now and investigate later.
|
|
|
|
|
|
|
|
| |
* Clarifies comments.
* Fixes an erroneous comment (about return type) added in #13455
(ec24813220f9d54108924dc04aecd24555277b99).
* Clarifies the name of a variable.
* Simplifies logic of pulling out the latest join for the requesting user.
|
| |
|
|
|
|
|
| |
Events can be un-rejected or newly-rejected during resync, so ensure we update
the database and caches when that happens.
|
|
|
|
|
| |
This adds support for the stable identifiers of MSC2285 while
continuing to support the unstable identifiers behind the configuration
flag. These will be removed in a future version.
|
| |
|
|
|
|
|
|
|
|
|
| |
(#13455)
* Adds docstrings and inline comments.
* Formats SQL queries using triple quoted strings.
* Minor formatting changes.
* Avoid fetching `event_push_summary_stream_ordering` multiple times
in the same transaction.
|
| |
|
|
|
|
|
|
| |
Still maintains the local in-memory lookup optimisation, but performs any external
lookup as part of the deferred that prevents duplicate lookups for the same
event at once. This assumes that fetching from an external
cache is an operation with non-zero cost.
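A minimal asyncio-based sketch of that deduplication pattern (Synapse itself uses Twisted deferreds, and the names here are made up):
```python
import asyncio

class DedupingEventCache:
    """Share one in-flight lookup per event so the external cache is hit once."""

    def __init__(self, fetch_external):
        self._fetch_external = fetch_external  # async callable: event_id -> event
        self._local = {}       # local in-memory results
        self._in_flight = {}   # event_id -> asyncio.Task

    async def get(self, event_id):
        if event_id in self._local:
            return self._local[event_id]
        task = self._in_flight.get(event_id)
        if task is None:
            task = asyncio.ensure_future(self._fetch_and_store(event_id))
            self._in_flight[event_id] = task
        return await task

    async def _fetch_and_store(self, event_id):
        try:
            event = await self._fetch_external(event_id)
            self._local[event_id] = event
            return event
        finally:
            # Drop the in-flight marker so later misses start a fresh lookup.
            self._in_flight.pop(event_id, None)
```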
|
|
|
|
|
|
| |
In Jaeger:
- Before: huge list of uncategorized database calls
- After: nice and collapsible into units of work
|
|
|
|
|
|
|
|
| |
Previously, `_resolve_state_at_missing_prevs` returned the resolved
state before an event and a partial state flag. These were unwieldy to
carry around and would only ever be used to build an event context. Build
the event context directly instead.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
|
|
| |
Make sure that we re-check the auth rules during state resync, otherwise
rejected events get un-rejected.
|
|
|
|
|
|
| |
(#13370)
Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>
|
|
|
|
|
| |
Make sure that we only pull out events from the db once they have no
prev-events with partial state.
|
|
|
|
|
|
|
|
|
| |
(#13355)
Avoid blocking on full state in `_resolve_state_at_missing_prevs` and
return a new flag indicating whether the resolved state is partial.
Thread that flag around so that it makes it into the event context.
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| |
|
|
|
|
|
|
|
|
| |
See #10826 and #10786 for context as to why we had to disable pruning on
those caches.
Now that `get_users_who_share_room_with_user` is called frequently only
for presence, we just need to make calls to it less frequent and then we
can remove the various levels of caching that are going on.
|
| |
|
| |
|
|
|
| |
After this change `synapse.logging` is fully typed.
|
|
|
| |
This comes from two identical definitions in each of the base stores, and means the base slaved store is now empty and can be removed.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Update `get_pdu` to return the untouched, pristine `EventBase` as it was originally seen over federation (no metadata added). Previously, we returned the same `event` reference that we stored in the cache; downstream code modified it in place and added metadata, like setting it as an `outlier`, which essentially poisoned our cache. Now we always return a copy of the `event`, so the original stays pristine in our cache and can be re-used for the next cache call.
Split out from https://github.com/matrix-org/synapse/pull/13205
As discussed at:
- https://github.com/matrix-org/synapse/pull/13205#discussion_r918365746
- https://github.com/matrix-org/synapse/pull/13205#discussion_r918366125
Related to https://github.com/matrix-org/synapse/issues/12584. This PR doesn't fix that issue because it hits [`get_event`, which reads from the local database before it tries to `get_pdu`](https://github.com/matrix-org/synapse/blob/7864f33e286dec22368dc0b11c06eebb1462a51e/synapse/federation/federation_client.py#L581-L594).
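A rough sketch (hypothetical, not the actual `get_pdu` code) of the copy-on-return behaviour described above:
```python
import copy

class PduCache:
    """Keep a pristine copy of each event; hand out copies callers may mutate."""

    def __init__(self):
        self._events = {}

    def set(self, event_id, event):
        self._events[event_id] = event

    def get(self, event_id):
        event = self._events.get(event_id)
        if event is None:
            return None
        # Return a copy so callers marking the event as an outlier (or adding
        # other metadata in place) cannot poison the cached original.
        return copy.deepcopy(event)
```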
|
|
|
|
| |
Functions that are decorated with `trace` are now properly typed
and the type hints for them are fixed.
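A hedged sketch of what properly typing such a decorator can look like, using `ParamSpec` so the wrapped function keeps its signature (illustrative only, not the actual `synapse.logging` code):
```python
import functools
from typing import Awaitable, Callable, TypeVar

from typing_extensions import ParamSpec

P = ParamSpec("P")
R = TypeVar("R")

def trace(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]:
    """Wrap an async function in a tracing span without losing its type hints."""

    @functools.wraps(func)
    async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        # A real implementation would start and finish a tracing span here.
        return await func(*args, **kwargs)

    return wrapper
```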
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Fix race conditions in the async cache invalidation logic by separating
the async & local invalidation calls and ensuring any async call is
executed first.
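A minimal sketch of that ordering (hypothetical names, not the actual Synapse cache code):
```python
async def invalidate(external_cache, local_cache, key):
    # Invalidate the external (async) cache first...
    await external_cache.delete(key)
    # ...and only then drop the local entry, so a concurrent local miss cannot
    # repopulate from stale external data while the delete is still in flight.
    local_cache.pop(key, None)
```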
Signed off by Nick @ Beeper (@Fizzadar).
|
|
|
|
| |
`_get_joined_profiles_from_event_ids`. (#13300)
|
| |
|
|
|
| |
This reverts commit 5d4028f217f178fcd384d5bfddd92225b4e78c51.
|
|
|
|
|
| |
To close: #10294.
Signed off by Nick @ Beeper.
|
|
|
|
|
| |
More prep work for asynchronous caching; this also makes all `process_replication_rows` methods consistent (the presence handler already is).
Signed off by Nick @ Beeper (@Fizzadar)
|
| |
|
|
|
|
|
| |
These columns were added back in Synapse 1.52, and have been populated for new
events since then. It's now (beyond) time to back-populate them for existing
events.
|
|
|
|
|
| |
There are two fixes here:
1. A long-standing bug where we incorrectly calculated `delta_ids`; and
2. A bug introduced in #13267 where we got the current state wrong.
|
|
|
|
|
| |
Some experimental prep work to enable external event caching based on #9379 & #12955. Doesn't actually move the cache at all, just lays the groundwork for asynchronously-implemented caches.
Signed off by Nick @ Beeper (@Fizzadar)
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Replace `get_new_events_for_appservice` with `get_all_new_events_stream`
The functions were near-identical, and this brings the AS worker closer
to the way federation senders work, which can allow multiple workers
to handle AS traffic.
* Pull received TS alongside events when processing the stream
This avoids an extra query per event when both the federation sender
and the appservice pusher process events.
|
| |
|
|
|
|
| |
These tables have been unused since Synapse v1.61.0, although schema version 72
was added in Synapse v1.62.0.
|
|
|
| |
This has been unused since Synapse 1.60.0 (#12679). It's time for it to go.
|
|
|
|
| |
The stack is already logged when waiting for an event to be un-partial
stated. Log the stack for rooms as well, to aid in debugging.
|
| |
|
| |
|
|
|
|
| |
We want to be as up to date as possible, and sleeping doesn't help here
and can mean we fall behind.
|
|
|
|
|
| |
Fixes #13196
Broken by #13005
|
|
|
|
|
|
|
|
|
|
|
| |
Bounce recalculation of current state to the correct event persister and
move recalculation of current state into the event persistence queue, to
avoid concurrent updates to a room's current state.
Also give recalculation of a room's current state a real stream
ordering.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
|
|
| |
This happened if we encountered a stream ordering in `event_push_actions` that had more rows than the batch size of the delete: if we don't delete any rows in an iteration, then the next time round we get the exact same stream ordering and get stuck.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Whenever we want to persist an event, we first compute an event context,
which includes the state at the event and a flag indicating whether the
state is partial. After a lot of processing, we finally try to store the
event in the database, which can fail for partial state events when the
containing room has been un-partial stated in the meantime.
We detect the race as a foreign key constraint failure in the data store
layer and turn it into a special `PartialStateConflictError` exception,
which makes its way up to the method in which we computed the event
context.
To make things difficult, the exception needs to cross a replication
request: `/fed_send_events` for events coming over federation and
`/send_event` for events from clients. We transport the
`PartialStateConflictError` as a `409 Conflict` over replication and
turn `409`s back into `PartialStateConflictError`s on the worker making
the request.
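As a rough sketch (hypothetical helper and client interface, not the actual replication code), the worker-side translation back could look like this:
```python
class PartialStateConflictError(Exception):
    """Persisting a partial-state event raced with the room being un-partial stated."""

async def send_event_via_replication(client, path, payload):
    # `client` is assumed to expose an async post_json(path, payload) returning
    # (status_code, body); the real replication client differs.
    status, body = await client.post_json(path, payload)
    if status == 409:
        # The event persister hit the partial-state race: surface it to the
        # caller so the original request can be retried.
        raise PartialStateConflictError()
    return body
```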
All client events go through
`EventCreationHandler.handle_new_client_event`, which is called in
*a lot* of places. Instead of trying to update all the code which
creates client events, we turn the `PartialStateConflictError` into a
`429 Too Many Requests` in
`EventCreationHandler.handle_new_client_event` and hope that clients
take it as a hint to retry their request.
On the federation event side, there are 7 places which compute event
contexts. 4 of them use outlier event contexts:
`FederationEventHandler._auth_and_persist_outliers_inner`,
`FederationHandler.do_knock`, `FederationHandler.on_invite_request` and
`FederationHandler.do_remotely_reject_invite`. These events won't have
the partial state flag, so we do not need to do anything for them.
The remaining 3 paths which create events are
`FederationEventHandler.process_remote_join`,
`FederationEventHandler.on_send_membership_event` and
`FederationEventHandler._process_received_pdu`.
We can't experience the race in `process_remote_join`, unless we're
handling an additional join into a partial state room, which currently
blocks, so we make no attempt to handle it correctly.
`on_send_membership_event` is only called by
`FederationServer._on_send_membership_event`, so we catch the
`PartialStateConflictError` there and retry just once.
`_process_received_pdu` is called by `on_receive_pdu` for incoming
events and `_process_pulled_event` for backfill. The latter should never
try to persist partial state events, so we ignore it. We catch the
`PartialStateConflictError` in `on_receive_pdu` and retry just once.
Referring to the graph of code paths in
https://github.com/matrix-org/synapse/issues/12988#issuecomment-1156857648
may make the above make more sense.
Signed-off-by: Sean Quah <seanq@matrix.org>
|
| |
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.62.0rc3 (2022-07-04)
==============================
Bugfixes
--------
- Update the version of the [ldap3 plugin](https://github.com/matrix-org/matrix-synapse-ldap3/) included in the `matrixdotorg/synapse` DockerHub images and the Debian packages hosted on `packages.matrix.org` to 0.2.1. This fixes [a bug](https://github.com/matrix-org/matrix-synapse-ldap3/pull/163) with usernames containing uppercase characters. ([\#13156](https://github.com/matrix-org/synapse/issues/13156))
- Fix a bug introduced in Synapse 1.62.0rc1 affecting unread counts for users on small servers. ([\#13168](https://github.com/matrix-org/synapse/issues/13168))
|
| | |
|