| Commit message | Author | Age | Files | Lines |
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
The function is used for two purposes: 1) for subscribers of streams to
get a token they can use to get further updates with, and 2) for
replication to track the position of the writers of the stream.
For streams with a single writer the two scenarios produce the same
result; however, the situation becomes complicated for streams with
multiple writers. The current `MultiWriterIdGenerator` does not
correctly handle the first case (which is not an issue, as it's only used
for the `caches` stream, which nothing subscribes to outside of
replication).
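To make the two purposes concrete, here is a minimal sketch (hypothetical code, not Synapse's actual `MultiWriterIdGenerator`; all names are made up) of why a subscriber token and a writer position diverge once a stream has multiple writers:
```python
# Hypothetical sketch: per-writer positions vs. a safe subscriber token.
class SketchIdGenerator:
    def __init__(self):
        self.writer_positions = {}  # writer name -> last stream ID it wrote
        self.persisted = set()      # stream IDs whose rows readers can see

    def advance(self, writer, stream_id):
        self.writer_positions[writer] = stream_id
        self.persisted.add(stream_id)

    def get_writer_position(self, writer):
        # Purpose (2): replication tracks each writer's own position.
        return self.writer_positions.get(writer, 0)

    def get_subscriber_token(self):
        # Purpose (1): a subscriber token must not advance past a gap,
        # otherwise rows a slower writer has yet to commit would be skipped.
        token = 0
        while (token + 1) in self.persisted:
            token += 1
        return token

gen = SketchIdGenerator()
gen.advance("writer-a", 1)
gen.advance("writer-b", 3)                        # ID 2 is not committed yet
assert gen.get_writer_position("writer-b") == 3   # fine for replication
assert gen.get_subscriber_token() == 1            # handing out 3 would skip 2
```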
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
async/await (#8063)
|
| |
|
|\
| |
| | |
Change HomeServer definition to work with typing.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Duplicating function signatures between server.py and server.pyi is
silly. This commit changes that by changing all `build_*` methods to
`get_*` methods and changing `_make_dependency_method` to work
as a descriptor that caches the produced value.
There are some changes in other files that were made to fix the typing
in server.py.
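A minimal sketch of the caching-descriptor idea (hypothetical names, not the actual `_make_dependency_method`): the descriptor builds the dependency once and then shadows itself on the instance.
```python
# Hypothetical sketch of a descriptor that caches what a get_* method builds.
class CachedGetter:
    def __init__(self, factory):
        self.factory = factory
        self.name = factory.__name__

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = self.factory(instance)      # build the dependency once
        # Shadow the (non-data) descriptor with a plain callable on the
        # instance, so later hs.get_clock() lookups skip __get__ entirely.
        instance.__dict__[self.name] = lambda: value
        return lambda: value

class HomeServer:
    @CachedGetter
    def get_clock(self):
        return object()                     # stands in for a real dependency

hs = HomeServer()
assert hs.get_clock() is hs.get_clock()     # built once, cached thereafter
```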
|
| | |
|
|/ |
|
|\
| |
| | |
With an undocumented configuration setting to enable them for specific users.
|
| |\
| | |
| | |
| | | |
babolivier/new_push_rules
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| |\ \
| | | |
| | | |
| | | | |
babolivier/new_push_rules
|
| | | | |
|
| | | |
| | | |
| | | |
| | | | |
database to async (#8042)
|
| | | | |
|
| | | | |
|
| | | | |
|
| |_|/
|/| | |
|
| | | |
|
| | | |
|
| |/
|/|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
`StatsHandler` handles updates to the `current_state_delta_stream`, and updates room stats such as the number of state events, joined users, etc.
However, it counts every new join membership as a new user entering a room (and that user being in another room), whereas it's possible for a user's membership status to go from join -> join, for instance when they change their per-room profile information.
This PR adds a check for join->join membership transitions, and bails out early, as none of the further checks are necessary at that point.
Due to this bug, membership stats in many rooms have ended up being wildly larger than their true values. I am not sure if we also want to include a migration step which recalculates these statistics (possibly using the `_populate_stats_process_rooms` bg update).
Bug introduced in the initial implementation https://github.com/matrix-org/synapse/pull/4338.
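A rough sketch of the early return described above (hypothetical code, not the actual `StatsHandler` logic):
```python
# Hypothetical membership-delta helper: join -> join must not change counts.
def membership_delta(prev_membership, new_membership):
    if prev_membership == "join" and new_membership == "join":
        # Per-room profile change (displayname/avatar): the user neither
        # entered nor left, so bail out before touching the room stats.
        return 0
    if new_membership == "join":
        return +1
    if prev_membership == "join":
        return -1
    return 0

assert membership_delta("join", "join") == 0     # profile update only
assert membership_delta("invite", "join") == +1  # genuinely joined
assert membership_delta("join", "leave") == -1   # genuinely left
```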
|
|\ \ |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | | |
Idea from matrix-org/synapse-dinsic#49
|
|/ / |
|
| | |
|
|/ |
|
|
|
|
|
| |
It serves no purpose, and updating every time we write to the device inbox
stream means all such transactions will conflict, causing lots of
transaction failures and retries.
|
|
|
|
| |
It's somewhat expected for us to have unknown room versions in the
database due to room version experiments.
|
|
|
|
| |
objects. (#7849)
|
|\
| |
| | |
Fix guest user registration with lots of client readers
|
| | |
|
| | |
|
| |
| |
| |
| | |
partly just to show it works, but also to remove a bit of code duplication.
|
| | |
|
|/
|
|
|
|
|
|
| |
When considering rooms to clean up in `delete_old_current_state_events`, skip
rooms which we are creating, which otherwise look a bit like rooms we have
left.
Fixes #7834.
|
|
|
|
|
|
|
|
|
| |
As far as I can tell from the sentry logs, the only time this has actually done
anything in the last two years is when we had two master workers running at
once, and even then, it made a bit of a mess of it (see
https://github.com/matrix-org/synapse/issues/7845#issuecomment-658238739).
Generally I feel like this code is doing more harm than good.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The Delete Room admin API allows server admins to remove rooms from the server
and block them.
`DELETE /_synapse/admin/v1/rooms/<room_id>`
It is a combination and improvement of the "[Shutdown room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/shutdown_room.md)" and "[Purge room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/purge_room.md)" APIs.
Fixes: #6425
It also fixes a bug in [synapse/storage/data_stores/main/room.py](synapse/storage/data_stores/main/room.py) in `get_room_with_stats`:
it should return `None` if the room is unknown, but instead it raises an `IndexError`.
https://github.com/matrix-org/synapse/blob/901b1fa561e3cc661d78aa96d59802cf2078cb0d/synapse/storage/data_stores/main/room.py#L99-L105
Related to:
- #5575
- https://github.com/Awesome-Technologies/synapse-admin/issues/17
Signed-off-by: Dirk Klimpel dirk@klimpel.org
|
|\ |
|
| |\ |
|
| | | |
|
| | | |
|
|/ / |
|
| |
| |
| |
| |
| |
| |
| |
| | |
Fixes #2181.
The basic premise is that, when we
fail to reject an invite via the remote server, we can generate our own
out-of-band leave event and persist it as an outlier, so that we have something
to send to the client.
|
| | |
|
| |
| |
| |
| |
| | |
This table is no longer used, so we may as well stop populating it. Removing it
would prevent people rolling back to older releases of Synapse, so that can
happen in a future release.
|
| | |
|
| |
| |
| | |
The CI appears to use the latest version of isort, which is a problem when isort gets a major version bump. Rather than try to pin the version, I've done what's necessary to make isort 5 happy with Synapse.
|
| |
| |
| | |
This makes it much easier to find where streams are referenced.
|
|/ |
|
| |
|
| |
|
|
|
|
|
| |
* Always return an unread_count in get_unread_event_push_actions_by_room_for_user
* Don't always expect unread_count to be there so we don't take out sync entirely if something goes wrong
|
|\
| |
| | |
Implementation of https://github.com/matrix-org/matrix-doc/pull/2625
|
| |\ |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| |\ \ |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This is a potential solution to https://github.com/vector-im/riot-web/issues/3374
and https://github.com/vector-im/riot-web/issues/5953
as raised by Mozilla at https://github.com/vector-im/riot-web/issues/10868.
This lets you define a push rule action which increases the badge (unread notification)
count on a given room, but doesn't actually send a push for that notification via email
or HTTP (a rough sketch of the counting distinction follows after this message).
We might want to define this as the default behaviour for group chats in future
to solve https://github.com/vector-im/riot-web/issues/3268 at last.
This is implemented as a string action rather than a tweak because:
* Other pushers don't care about the tweak, given they won't ever get pushed
* The DB can store the tweak more efficiently using the existing `notify` table.
* It avoids breaking the default_notif/highlight_action optimisations.
Clients which generate their own notifs (e.g. desktop notifs from Riot/Web)
would need to be aware of the new push action to uphold it.
An alternative way to do this would be to maintain a `msg_count` alongside
`highlight_count` and `notification_count` in `unread_notifications` in sync responses.
However, doing this by counting the rows in `events` since the `stream_position`
of the user's last read receipt turns out to be painfully slow (~200ms), perhaps
due to the size of the events table. So instead, we use the highly optimised
existing event_push_actions (and event_push_actions_staging) table to maintain
the counts - using the code paths which already exist for tracking unread
notification counts efficiently. These queries are typically ~3ms or so.
The biggest issues I see here are:
* We're slightly repurposing the `notif` field on `event_push_actions` to
track whether a given action actually sent a `push` or not. This doesn't
seem unreasonable, but it's slightly naughty given that previously the
field explicitly tracked whether `notify` was true for the action (and
as a result, it was uselessly always set to 1 in the DB).
* We're going to put more load on the `event_push_actions` table for all the
random group chats which people had previously muted. In practice I don't
think there are many of these though.
* There isn't an MSC for this yet (although this comment could become one).
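A hedged sketch of the counting distinction: every matching event bumps the room's unread badge, but only actions that actually notify produce a push. The `mark_unread` action name below is purely illustrative, not a confirmed identifier:
```python
# Hypothetical event_push_actions rows for one room.
rows = [
    {"event_id": "$1", "actions": ["notify"]},
    {"event_id": "$2", "actions": ["mark_unread"]},              # badge only
    {"event_id": "$3", "actions": ["notify", {"set_tweak": "highlight"}]},
]

unread_count = len(rows)                                          # all rows count
notify_count = sum(1 for r in rows if "notify" in r["actions"])   # pushed rows only
assert (unread_count, notify_count) == (3, 2)
```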
|
| | | |
| | | |
| | | | |
The aim here is to make it easier to reason about when streams are limited and when they're not, by moving the logic into the database functions themselves. This should mean we can kill off the `db_query_to_update_function` function.
|
| | | | |
|
| |_|/
|/| | |
|
| | | |
|
| |/
|/| |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Ensure account data stream IDs are unique.
The account data stream is shared between three tables, and the maximum
allocated ID was tracked in a dedicated table. Updating the max ID
happened outside the transaction that allocated the ID, leading to a
race where if the server was restarted then the same ID could be
allocated but the max ID failed to be updated, leading it to be reused.
The ID generators have support for tracking across multiple tables, so
we may as well use that instead of a dedicated table.
* Fix bug in account data replication stream.
If the same stream ID was used in both global and room account data then
getting updates for the replication stream would fail due to
`heapq.merge(..)` trying to compare a `str` with a `None`. (This is
because you'd have two rows like `(534, '!room')` and `(534, None)` from
the room and global account data tables).
The fix is just to order by stream ID, since we don't rely on the ordering
beyond that. The bug where stream IDs can be reused should be fixed now, so
this case shouldn't happen going forward. (A small reproduction of the
`heapq.merge` failure follows after this message.)
Fixes #7617
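A small reproduction of the `heapq.merge(..)` failure and of the ordering fix (row shapes are assumed for illustration):
```python
import heapq

# One row each from the room and global account data tables, sharing ID 534.
room_account_data = [(534, "!room")]
global_account_data = [(534, None)]

try:
    list(heapq.merge(room_account_data, global_account_data))
except TypeError as e:
    print(e)  # '<' not supported between str and NoneType (tuple tie-break)

# The fix: merge on the stream ID alone, since nothing relies on any further
# ordering between rows with the same ID.
merged = list(
    heapq.merge(room_account_data, global_account_data, key=lambda row: row[0])
)
assert merged == [(534, "!room"), (534, None)]
```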
|
| |
| |
| |
| |
| | |
Based on #7619.
Makes `get_user_id_by_threepid` and its call stack async.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The query keeps showing up in my slow query log.
This changes the plan under the top-level Sort node from
```
WindowAgg (cost=280335.88..292963.15 rows=561212 width=80) (actual time=138.651..160.562 rows=27112 loops=1)
-> Sort (cost=280335.88..281738.91 rows=561212 width=84) (actual time=138.597..140.622 rows=27112 loops=1)
Sort Key: state_groups_state.type, state_groups_state.state_key, state_groups_state.state_group
Sort Method: quicksort Memory: 4581kB
-> Nested Loop (cost=2.83..226745.22 rows=561212 width=84) (actual time=21.548..47.657 rows=27112 loops=1)
-> HashAggregate (cost=2.27..3.28 rows=101 width=8) (actual time=21.526..21.535 rows=20 loops=1)
Group Key: state.state_group
-> CTE Scan on state (cost=0.00..2.02 rows=101 width=8) (actual time=21.280..21.493 rows=20 loops=1)
-> Index Scan using state_groups_state_type_idx on state_groups_state (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.005..0.991 rows=1356 loops=20)
Index Cond: (state_group = state.state_group)
```
to
```
Nested Loop (cost=2.83..226745.22 rows=561212 width=84) (actual time=24.194..52.834 rows=27112 loops=1)
-> HashAggregate (cost=2.27..3.28 rows=101 width=8) (actual time=24.130..24.138 rows=20 loops=1)
Group Key: state.state_group
-> CTE Scan on state (cost=0.00..2.02 rows=101 width=8) (actual time=23.887..24.113 rows=20 loops=1)
-> Index Scan using state_groups_state_type_idx on state_groups_state (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.016..1.159 rows=1356 loops=20)
Index Cond: (state_group = state.state_group)
```
This cuts the execution time from ~190ms to ~130ms, i.e. a reduction
of ~30%.
The full plans are visualised at https://explain.depesz.com/s/WpbT and
https://explain.depesz.com/s/KlEk
Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
|
| |
| |
| |
| |
| | |
Fixes #7469
Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
|
| |
| |
| | |
We were using `logger` syntax which isn't supported by `Exception`s.
|
| |
| |
| |
| |
| |
| | |
The bg update never managed to complete, because it kept being interrupted by
transactions which want to take a lock.
Just doing it in the foreground isn't that bad, and is a good deal simpler.
|
| |
| |
| |
| |
| |
| | |
we can use `make_in_list_sql_clause` rather than doing our own half-baked
equivalent, which has the benefit of working just fine with empty lists.
(This has quite a lot of tests, so I think it's pretty safe)
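A hand-rolled sketch of what a helper along these lines buys you; the real `make_in_list_sql_clause` takes different arguments, so treat the signature below as illustrative only:
```python
# Hypothetical "column IN (...)" builder that degrades cleanly for empty lists.
def make_in_list_sql_clause(column, values):
    if not values:
        return "1 = 0", []        # matches nothing, but is still valid SQL
    placeholders = ", ".join("?" for _ in values)
    return f"{column} IN ({placeholders})", list(values)

assert make_in_list_sql_clause("room_id", []) == ("1 = 0", [])
assert make_in_list_sql_clause("room_id", ["!a", "!b"]) == (
    "room_id IN (?, ?)",
    ["!a", "!b"],
)
```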
|
| |
| |
| | |
These are surprisingly expensive, and we only really need to do them at startup.
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).
Most of Synapse doesn't hit such races, so we don't do the waiting automagically, instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.
People probably want to look at this commit by commit.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When a call to `user_device_resync` fails, we don't currently mark the remote user's device list as out of sync, nor do we retry to sync it.
https://github.com/matrix-org/synapse/pull/6776 introduced some code infrastructure to mark device lists as stale/out of sync.
This commit uses that code infrastructure to mark device lists as out of sync if processing an incoming device list update makes the device handler realise that the device list is out of sync, but we can't resync right now.
It also adds a looping call to retry all failed resyncs every 30s. This shouldn't cause too much spam in the logs as this commit also removes the "Failed to handle device list update for..." warning logs when catching `NotRetryingDestination`.
Fixes #7418
|
| |
| |
| |
| |
| | |
`_is_server_still_joined` will throw if it is given state updates that include non-user-ID state keys alongside local user leaves. This is actually rarely a problem, since local leaves almost always get persisted by themselves.
(I discovered this on a branch that was otherwise broken, so I haven't seen this in the wild)
|
|\ \
| | |
| | | |
Kill off some old python 2 code
|
| | |
| | |
| | |
| | | |
this is no longer needed on python 3
|
| | |
| | |
| | |
| | | |
this is a no-op on python 3.
|
| | |
| | |
| | |
| | | |
this is a no-op on python 3.
|
| | |
| | |
| | |
| | |
| | |
| | | |
Make sure that the AccountDataStream presents complete updates, in the right
order.
This is much the same fix as #7337 and #7358, but applied to a different stream.
|
| | | |
|
| | |
| | |
| | |
| | |
| | | |
This is required as both event persistence and the background update need access to this function. It should be perfectly safe for two workers to write to that table at the same time.
|
| | |
| | |
| | |
| | | |
queries (#7465)
|
| | |
| | |
| | |
| | |
| | | |
This allows us to have the logic on both master and workers, which is necessary to move event persistence off master.
We also combine the instantiation of ID generators from DataStore and slave stores to the base worker stores. This allows us to select which process writes events independently of the master/worker splits.
|
|/ /
| |
| |
| | |
update_remote_profile_cache (#7511)
|
|\ \
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Fix a bug where the `get_joined_users` cache could be corrupted by custom
status events (or other state events with a state_key matching the user ID).
The bug was introduced by #2229, but has largely gone unnoticed since then.
Fixes #7099, #7373.
|
| | |
| | |
| | |
| | | |
This is a cherry-pick of 1a1da60ad2c9172fe487cd38a164b39df60f4cb5 (#7470)
to the release-v1.13.0 branch.
|
| | |
| | |
| | | |
This is safe as we can now write to cache invalidation stream on workers, and is required for when we move event persistence off master.
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The aim here is to get to a stage where we have a `PersistEventStore` that holds all the write methods used during event persistence, so that we can take that class out of the `DataStore` mixin and instantiate it separately. This will allow us to instantiate it on processes other than master, while also ensuring it is only available on processes that are configured to write to the events stream.
This is a bit of an architectural change, where we end up with multiple classes per data store (rather than one per data store we have now). We end up having:
1. Storage classes that provide high level APIs that can talk to multiple data stores.
2. Data store modules that consist of classes that must point at the same database instance.
3. Classes in a data store that can be instantiated on processes depending on config.
|
| | | |
|
| | |
| | |
| | |
| | | |
variables (#6391)
|
| | | |
|
|\| |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
* release-v1.13.0:
Don't UPGRADE database rows
RST indenting
Put rollback instructions in upgrade notes
Fix changelog typo
Oh yeah, RST
Absolute URL it is then
Fix upgrade notes link
Provide summary of upgrade issues in changelog. Fix )
Move next version notes from changelog to upgrade notes
Changelog fixes
1.13.0rc1
Documentation on setting up redis (#7446)
Rework UI Auth session validation for registration (#7455)
Fix errors from malformed log line (#7454)
Drop support for redis.dbid (#7450)
|
| | |
| | |
| | |
| | | |
Be less strict about validation of UI authentication sessions during
registration to match client expectations.
|
| | | |
|
| | | |
|
|\| | |
|
| |\ \ |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Currently we copy `users_who_share_room` needlessly about three times,
which is expensive when the set is large (which it can easily be).
|
|\ \ \ \
| | | | |
| | | | | |
Make get_e2e_cross_signing_key delegate to get_e2e_cross_signing_keys_bulk
|
| | | | |
| | | | |
| | | | |
| | | | | |
... mostly because the latter has a cache.
|
| | |/ /
| |/| |
| | | |
| | | |
| | | | |
There's no point carefully dividing a list into batches, and then completely
ignoring the batches.
|
|\ \ \ \ |
|
| |/ / /
| | | |
| | | |
| | | |
| | | |
| | | | |
This will be used to coordinate stream IDs across multiple writers.
Functions as the equivalent of both `StreamIdGenerator` and
`SlavedIdTracker`.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
(#7387)
populate_stats_process_rooms was added in #5971 / v1.4.0; current_state_events_membership was added in #5706 / v1.3.0.
Fixes #7380.
|
| |/ / |
|
| | | |
|
|/ /
| |
| |
| | |
most of these params don't really need to be lists.
|
| |
| |
| |
| |
| | |
By persisting the user interactive authentication sessions to the database, this fixes
situations where a user hits different workers throughout their auth session and also
allows sessions to persist through restarts of Synapse.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Factor out functions for injecting events into the database
I want to add some more flexibility to the tools for injecting events into the
database, and I don't want to clutter up HomeserverTestCase with them, so let's
factor them out to a new file.
* Rework TestReplicationDataHandler
This wasn't very easy to work with: the mock wrapping was largely superfluous,
and it's useful to be able to inspect the received rows, and clear out the
received list.
* Fix AssertionErrors being thrown by EventsStream
Part of the problem was that there was an off-by-one error in the assertion,
but also the limit logic was too simple. Fix it all up and add some tests.
|
| |
| |
| |
| |
| | |
(#6881)
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Figuring out how to correctly limit updates from this stream without dropping
entries is far more complicated than just counting the number of rows being
returned. We need to consider each query separately and, if any one query hits
the limit, truncate the results from the others.
I think this also fixes some potentially long-standing bugs where events or
state changes could get missed if we hit the limit on either query.
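A hedged sketch of the truncation rule described above, with bare stream IDs standing in for the rows returned by the two queries:
```python
# Hypothetical results of the two queries, as stream IDs, with a limit of 3.
rows_a = [1, 2, 5, 6]
rows_b = [3, 4, 7]
LIMIT = 3

a = rows_a[:LIMIT]                # [1, 2, 5] -- hit the limit
b = rows_b[:LIMIT]                # [3, 4, 7] -- hit the limit
if len(a) == LIMIT or len(b) == LIMIT:
    # At least one query was truncated, so only stream IDs up to the smaller
    # endpoint are known to be complete; drop anything beyond it rather than
    # risk skipping rows on the next fetch.
    upto = min(a[-1], b[-1])      # 5
    a = [i for i in a if i <= upto]
    b = [i for i in b if i <= upto]

assert sorted(a + b) == [1, 2, 3, 4, 5]   # 6 and 7 arrive in the next batch
```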
|
| | |
|
| |
| |
| |
| |
| | |
We could end up looking up tens of thousands of events, which could cause large
amounts of data to be logged to the postgres log.
|
| | |
|
| |
| |
| |
| | |
We seem to have some duplicates, which could do with being cleared out.
|
| | |
|
| |
| |
| | |
They just get in the way.
|
| |
| |
| |
| | |
Fixes a race between handling `POSITION` and `RDATA` commands. We do this by simply linearizing handling of them.
|
| | |
|
|\ \
| | |
| | | |
Only run one background update at a time
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | | |
returning a None or an int that we don't use is confusing.
|
| | |
| | |
| | |
| | | |
(Almost) everywhere that uses it is happy with an awaitable.
|
| | |
| | |
| | |
| | | |
This was only used in a unit test, so let's just inline it in the test.
|
|/ /
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Occasionally we could get a federation device list update transaction which
looked like:
```
[
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D2', 'prev_id': [], 'stream_id': 12, 'deleted': True}},
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D1', 'prev_id': [12], 'stream_id': 11, 'deleted': True}},
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D3', 'prev_id': [11], 'stream_id': 13, 'deleted': True}}
]
```
Having `stream_ids` which are lower than `prev_ids` looks odd. It might work
(I'm not actually sure), but in any case it doesn't seem like a reasonable
thing to expect other implementations to support.
|
| |
| |
| | |
Signed-off-by: Karl Linderhed <git@karlinde.se>
|
| | |
|
| | |
|
| |
| |
| |
| | |
make sure we clear out all but one update for the user
|
| | |
|
| | |
|
| |
| |
| |
| | |
Fixes: #7127
Signed-off-by: David Vo <david@vovo.id.au>
|
| |
| |
| | |
This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Pull Sentinel out of LoggingContext
... and drop a few unnecessary references to it
* Factor out LoggingContext.current_context
move `current_context` and `set_context` out to top-level functions (sketched below).
Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.
* move copy-to-parent into `stop`
this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.
* Replace `LoggingContext.alive` with `finished`
Turn `alive` into `finished` and make it a bit better defined.
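A rough sketch of the factored-out functions (assumed shape and names, not the actual `synapse.logging.context` module), backed by thread-local storage:
```python
import threading

_local = threading.local()

SENTINEL_CONTEXT = object()   # stands in for the former LoggingContext.sentinel

def current_context():
    return getattr(_local, "context", SENTINEL_CONTEXT)

def set_current_context(context):
    previous = current_context()
    _local.context = context
    return previous

class LoggingContext:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self._previous = set_current_context(self)
        return self

    def __exit__(self, *exc):
        set_current_context(self._previous)

with LoggingContext("request-1") as ctx:
    assert current_context() is ctx
assert current_context() is SENTINEL_CONTEXT
```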
|
| | |
|
|\ \
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
* Add 'device_lists_outbound_pokes' as extra table.
This makes sure we check all the relevant tables to get the current max
stream ID.
Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
* Change device lists stream to have one row per id.
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
* Change device list replication to match new semantics.
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
* Newsfile
* Remove handling of multiple rows per ID
* Fix worker handling
* Comments from review
|
| | | |
|
| |\ \
| | | |
| | | |
| | | | |
erikj/fixup_devices_stream
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
|
| | | |
| | | |
| | | |
| | | |
| | | | |
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This makes sure we check all the relevant tables to get the current max
stream ID.
Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
It was originally implemented by pulling the full auth chain of all
state sets out of the database and doing set comparison. However, that
can take a lot of work if the state and auth chains are large.
Instead, let's try and fetch the auth chains at the same time and
calculate the difference on the fly, allowing us to bail early if all
the auth chains converge. Assuming that the auth chains do converge more
often than not, this should improve performance. Hopefully.
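A hedged sketch of the on-the-fly approach: walk the auth chains of each state set in parallel and stop once everything still being expanded is common to all of them. It treats each state event as part of its own chain for brevity, and the dict lookup stands in for database fetches:
```python
# Hypothetical event -> auth-event parents, standing in for DB lookups.
auth_events = {
    "A": [], "B": ["A"], "C": ["A"],
    "D": ["B"], "E": ["C"],
}

def auth_chain_difference(state_sets):
    seen = [set(s) for s in state_sets]       # events reached from each set
    frontiers = [set(s) for s in state_sets]  # events still being expanded
    while any(frontiers):
        frontiers = [
            {parent for e in f for parent in auth_events[e]} - s
            for f, s in zip(frontiers, seen)
        ]
        for s, f in zip(seen, frontiers):
            s |= f
        remaining = set().union(*frontiers)
        # Bail early once everything still being walked is already in every
        # chain: anything reachable from here on is common to all of them.
        if remaining and all(remaining <= s for s in seen):
            break
    union = set().union(*seen)
    common = set.intersection(*seen)
    return union - common

assert auth_chain_difference([{"D"}, {"E"}]) == {"B", "C", "D", "E"}
```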
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Fixes #7065
This is basically the same as https://github.com/matrix-org/synapse/pull/6847 except it tries to populate events from `state_events` rather than `current_state_events`, since the latter might have been cleared from the state of some rooms too early, leaving them with a `NULL` room version.
|
| | | | |
|
|\ \ \ \
| | | | |
| | | | |
| | | | |
| | | | | |
matrix-org/babolivier/get_time_of_last_push_action_before
Move get_time_of_last_push_action_before to the EventPushActionsWorkerStore
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Fixes #7054
I also had a look at the rest of the functions in
`EventPushActionsStore` and in the push notifications send code and it
looks to me like there shouldn't be any other method with this issue in
this part of the codebase.
|
| | | | |
| | | | |
| | | | |
| | | | | |
room ver. (#7037)
|
|/ / / /
| | | |
| | | |
| | | |
| | | | |
* Break down monthly active users by appservice_id and emit via prometheus.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
| |/ /
|/| |
| | |
| | |
| | | |
This is a precursor to giving EventBase objects the knowledge of which room version they belong to.
|
|/ /
| |
| |
| |
| | |
This currently causes presence notify code to log exceptions when there
are no state changes to process. This doesn't actually cause any problems
as we'd simply do nothing anyway.
|
| |
| |
| | |
Fix #6910
|
| |
| |
| |
| |
| | |
I cracked, and added some type definitions in synapse.storage.
|
| |
| |
| |
| |
| | |
When we get an invite over federation, store the room version in the rooms table.
The general idea here is that, when we pull the invite out again, we'll want to know what room_version it belongs to (so that we can later redact it if need be). So we need to store it somewhere...
|
| |
| |
| | |
Signed-off-by: Uday Bansal <43824981+udaybansal19@users.noreply.github.com>
|
| |
| |
| |
| |
| |
| | |
Some of the database deltas rely on `config.server_name` being set correctly,
so we should check that it is before running the deltas.
Fixes #6870.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
This is intended as a precursor to storing room versions when we receive an
invite over federation, but has the happy side-effect of fixing #3374 at last.
In short: change `store_room`'s try/except into a proper upsert which
updates the right columns.
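A sketch of the kind of upsert meant here; the column names and `?` placeholders are illustrative rather than the exact `rooms` schema:
```python
# Hypothetical upsert replacing the old try/except-around-INSERT pattern.
UPSERT_ROOM_SQL = """
    INSERT INTO rooms (room_id, room_version, is_public, creator)
    VALUES (?, ?, ?, ?)
    ON CONFLICT (room_id) DO UPDATE SET
        room_version = EXCLUDED.room_version
"""

def upsert_room(txn, room_id, room_version, is_public, creator):
    # One statement: insert the row if the room is new, otherwise update the
    # column we care about instead of swallowing a unique-constraint error.
    txn.execute(UPSERT_ROOM_SQL, (room_id, room_version, is_public, creator))
```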
|
| |
| |
| |
| | |
Ensure good comprehension hygiene using flake8-comprehensions.
|
| | |
|
| |
| |
| | |
Signed-off-by: Ruben Barkow-Kuder <github@r.z11.de>
|
| | |
|
| |
| |
| |
| |
| |
| | |
The state res v2 algorithm only cares about the difference between auth
chains, so we can pass in the known common state to the `get_auth_chain`
storage function so that it can ignore those events.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Increase DB/CPU perf of `_is_server_still_joined` check.
For rooms with a large amount of state a single user leaving could cause
us to go and load a lot of membership events and then pull out
membership state in a large number of batches.
* Newsfile
* Update synapse/storage/persist_events.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
* Fix adding if too soon
* Update docstring
* Review comments
* Woops typo
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| | |
|
| |
| |
| | |
We do this by moving the recursive query to be fully in the DB.
|
| |
| |
| |
| | |
delete_old_current_state_events (#6924)
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
|
|\ \
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Synapse 1.10.0rc2 (2020-02-06)
==============================
Bugfixes
--------
- Fix an issue with cross-signing where device signatures were not sent to remote servers. ([\#6844](https://github.com/matrix-org/synapse/issues/6844))
- Fix to the unknown remote device detection which was introduced in 1.10.rc1. ([\#6848](https://github.com/matrix-org/synapse/issues/6848))
Internal Changes
----------------
- Detect unexpected sender keys on remote encrypted events and resync device lists. ([\#6850](https://github.com/matrix-org/synapse/issues/6850))
|
| | |
| | |
| | | |
add device signatures to device key query results
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We were looking at the wrong event type (`m.room.encryption` vs
`m.room.encrypted`).
Also fix up the duplicate `EventTypes` entries.
Introduced in #6776.
|
| | |
| | |
| | |
| | |
| | | |
* Reduce txn performance logging to DEBUG
* Changelog.d
|
| | |
| | |
| | | |
We're going to need this so that we can figure out how to handle redactions when fetching events from the database.
|
|/ / |
|
| |
| |
| |
| | |
We were in fact only deleting the stale marker when we got an incremental
update, rather than when we did a full resync.
|
| |
| |
| |
| | |
So that we can start factoring out some of this boilerplatey boilerplate.
|
| |
| |
| |
| |
| | |
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When a server leaves a room it may stop sharing a room with remote
users, and thus not get any updates to their device lists. So we need to
check for this case and delete those device lists from the cache.
We don't need to do this if we stop sharing a room because the remote
user leaves the room, because we track that case via looking at
membership changes.
|
| |
| |
| |
| |
| |
| | |
If we detect that the remote users' keys may have changed then we should
attempt to resync against the remote server rather than using the
(potentially) stale local cache.
|
| |
| |
| |
| |
| |
| | |
Otherwise it's just stale data, which may get deleted later anyway so
can't be relied on. It's also a bit of a shotgun if we're trying to get
the current state of a room we're not in.
|
| |
| |
| |
| | |
We just mark the fact that the cache may be stale in the database for
now.
|
| |
| |
| | |
Using a non-C locale can cause issues when upgrading the OS.
|
|\ \ |
|
| | |
| | |
| | |
| | |
| | | |
Calling the invalidation function during initialisation of the data
stores introduces a circular dependency, causing Synapse to fail to
start.
|
| | |
| | |
| | | |
This is so that we don't have to rely on pulling it out from `current_state_events` table.
|
| | |
| | |
| | | |
Currently if a worker invalidates a cache it will be streamed to master, which then doesn't forward those invalidations on to other workers.
|
| | |
| | |
| | |
| | |
| | |
| | | |
There are quite a few places where we assume that a redaction event has a
corresponding `redacts` key, which is not always the case. So let's
cheekily make it so that event.redacts just returns None instead.
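A minimal sketch of the change (hypothetical, not the EventBase implementation):
```python
# Hypothetical event wrapper: `redacts` falls back to None when absent.
class Event:
    def __init__(self, content):
        self._dict = content

    @property
    def redacts(self):
        # Previously assumed the key existed; returning None for malformed
        # redactions saves a KeyError in every caller.
        return self._dict.get("redacts")

assert Event({"type": "m.room.redaction", "redacts": "$abc"}).redacts == "$abc"
assert Event({"type": "m.room.redaction"}).redacts is None
```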
|
|/ / |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
Fix #6727
Related #6655
Co-authored-by: Erik Johnston <erikj@jki.re>
|
|\ \
| | |
| | | |
Fix instantiation of message retention purge jobs
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When figuring out which topological token to start a purge job at, we
need to do the following:
1. Figure out a timestamp before which events will be purged
2. Select the first stream ordering after that timestamp
3. Select info about the first event after that stream ordering
4. Build a topological token from that info
In some situations (e.g. quiet rooms with a short max_lifetime), there
might not be an event after the stream ordering at step 3, therefore we
abort the purge with the error `No event found`. To mitigate that, this
patch fetches the first event _before_ the stream ordering, instead of
after.
|
| | | |
|
| | | |
|
| | | |
|
|/ /
| |
| |
| |
| |
| |
| | |
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
|
| | |
|