| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Ensure account data stream IDs are unique.
The account data stream is shared between three tables, and the maximum
allocated ID was tracked in a dedicated table. Updating the max ID
happened outside the transaction that allocated the ID, leading to a
race where if the server was restarted then the same ID could be
allocated but the max ID failed to be updated, leading it to be reused.
The ID generators have support for tracking across multiple tables, so
we may as well use that instead of a dedicated table.
* Fix bug in account data replication stream.
If the same stream ID was used in both global and room account data then
getting updates for the replication stream would fail due to
`heapq.merge(..)` trying to compare a `str` with a `None`. (This is
because you'd have two rows like `(534, '!room')` and `(534, None)` from
the room and global account data tables).
The fix is just to order by stream ID, since we don't rely on the ordering
beyond that. The bug where stream IDs can be reused should be fixed now,
so this case shouldn't happen going forward.
Fixes #7617
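For illustration, a minimal standalone reproduction of the `heapq.merge` failure and the stream-ID-only ordering that avoids it (the row shapes are simplified from the real tables):
```
import heapq

# Simplified rows: (stream_id, room_id) from room account data,
# (stream_id, None) from global account data.
room_rows = [(533, "!room"), (534, "!room")]
global_rows = [(534, None), (535, None)]

try:
    list(heapq.merge(room_rows, global_rows))
except TypeError as e:
    # Comparing '!room' with None on the tied stream ID blows up.
    print("merge failed:", e)

# Ordering on the stream ID alone sidesteps the str/None comparison.
merged = list(heapq.merge(room_rows, global_rows, key=lambda row: row[0]))
print(merged)
```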
|
|
|
|
|
| |
Based on #7619
Makes `get_user_id_by_threepid` and its call stack async.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The query keeps showing up in my slow query log.
This changes the plan under the top-level Sort node from
```
WindowAgg (cost=280335.88..292963.15 rows=561212 width=80) (actual time=138.651..160.562 rows=27112 loops=1)
-> Sort (cost=280335.88..281738.91 rows=561212 width=84) (actual time=138.597..140.622 rows=27112 loops=1)
Sort Key: state_groups_state.type, state_groups_state.state_key, state_groups_state.state_group
Sort Method: quicksort Memory: 4581kB
-> Nested Loop (cost=2.83..226745.22 rows=561212 width=84) (actual time=21.548..47.657 rows=27112 loops=1)
-> HashAggregate (cost=2.27..3.28 rows=101 width=8) (actual time=21.526..21.535 rows=20 loops=1)
Group Key: state.state_group
-> CTE Scan on state (cost=0.00..2.02 rows=101 width=8) (actual time=21.280..21.493 rows=20 loops=1)
-> Index Scan using state_groups_state_type_idx on state_groups_state (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.005..0.991 rows=1356 loops=20)
Index Cond: (state_group = state.state_group)
```
to
```
Nested Loop (cost=2.83..226745.22 rows=561212 width=84) (actual time=24.194..52.834 rows=27112 loops=1)
-> HashAggregate (cost=2.27..3.28 rows=101 width=8) (actual time=24.130..24.138 rows=20 loops=1)
Group Key: state.state_group
-> CTE Scan on state (cost=0.00..2.02 rows=101 width=8) (actual time=23.887..24.113 rows=20 loops=1)
-> Index Scan using state_groups_state_type_idx on state_groups_state (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.016..1.159 rows=1356 loops=20)
Index Cond: (state_group = state.state_group)
```
This cuts the execution time from ~190ms to ~130ms, i.e. a reduction
of ~30%.
The full plans are visualised at https://explain.depesz.com/s/WpbT and
https://explain.depesz.com/s/KlEk
Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
|
|
|
|
|
| |
Fixes #7469
Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
|
|
|
| |
We were using `logger` syntax which isn't supported by `Exception`s.
|
|
|
|
|
|
| |
The bg update never managed to complete, because it kept being interrupted by
transactions which wanted to take a lock.
Just doing it in the foreground isn't that bad, and is a good deal simpler.
|
|
|
|
|
|
| |
we can use `make_in_list_sql_clause` rather than doing our own half-baked
equivalent, which has the benefit of working just fine with empty lists.
(This has quite a lot of tests, so I think it's pretty safe)
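For reference, a hedged sketch of how the helper is typically used (the query and table here are made up for illustration):
```
from synapse.storage.database import make_in_list_sql_clause

def _fetch_event_json_txn(txn, event_ids):
    # Builds e.g. "event_id IN (?, ?, ?)" (or "= ANY(?)" on postgres) plus
    # the matching argument list, and copes with an empty event_ids list.
    clause, args = make_in_list_sql_clause(txn.database_engine, "event_id", event_ids)
    txn.execute("SELECT event_id, json FROM event_json WHERE " + clause, args)
    return txn.fetchall()
```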
|
|
|
| |
These are surprisingly expensive, and we only really need to do them at startup.
|
| |
|
|
|
|
|
|
|
| |
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).
Most of Synapse doesn't hit such races, so we don't do the waiting automagically; instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.
People probably want to look at this commit by commit.
|
|
|
|
|
|
|
|
|
|
|
| |
When a call to `user_device_resync` fails, we don't currently mark the remote user's device list as out of sync, nor do we retry to sync it.
https://github.com/matrix-org/synapse/pull/6776 introduced some code infrastructure to mark device lists as stale/out of sync.
This commit uses that code infrastructure to mark device lists as out of sync if processing an incoming device list update makes the device handler realise that the device list is out of sync, but we can't resync right now.
It also adds a looping call to retry all failed resyncs every 30s. This shouldn't cause too much spam in the logs as this commit also removes the "Failed to handle device list update for..." warning logs when catching `NotRetryingDestination`.
Fixes #7418
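Roughly, the retry loop amounts to something like the following (method names are illustrative, and the real code wraps the call as a background process):
```
from synapse.util.retryutils import NotRetryingDestination

class DeviceListUpdater:
    def __init__(self, hs):
        self.store = hs.get_datastore()
        # Retry outstanding device list resyncs every 30 seconds.
        hs.get_clock().looping_call(self._maybe_retry_device_resync, 30 * 1000)

    async def _maybe_retry_device_resync(self):
        # Hypothetical helper returning users whose cached device list is stale.
        stale_users = await self.store.get_users_needing_device_list_resync()
        for user_id in stale_users:
            try:
                await self.user_device_resync(user_id)
            except NotRetryingDestination:
                # Remote is still in backoff; try again on the next pass.
                continue
```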
|
|
|
|
|
| |
`_is_server_still_joined` will throw if it is given state updates that have non-user-ID state keys alongside local user leaves. This is actually rarely a problem since local leaves almost always get persisted by themselves.
(I discovered this on a branch that was otherwise broken, so I haven't seen this in the wild)
|
|\
| |
| | |
Kill off some old python 2 code
|
| |
| |
| |
| | |
this is no longer needed on python 3
|
| |
| |
| |
| | |
this is a no-op on python 3.
|
| |
| |
| |
| | |
this is a no-op on python 3.
|
| |
| |
| |
| |
| |
| | |
Make sure that the AccountDataStream presents complete updates, in the right
order.
This is much the same fix as #7337 and #7358, but applied to a different stream.
|
| | |
|
| |
| |
| |
| |
| | |
This is required as both event persistence and the background update need access to this function. It should be perfectly safe for two workers to write to that table at the same time.
|
| |
| |
| |
| | |
queries (#7465)
|
| |
| |
| |
| |
| | |
This allows us to have the logic on both master and workers, which is necessary to move event persistence off master.
We also combine the instantiation of ID generators from DataStore and slave stores to the base worker stores. This allows us to select which process writes events independently of the master/worker splits.
|
|/
|
|
| |
update_remote_profile_cache (#7511)
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
|
| |
| |
| |
| |
| |
| |
| |
| | |
Fix a bug where the `get_joined_users` cache could be corrupted by custom
status events (or other state events with a state_key matching the user ID).
The bug was introduced by #2229, but has largely gone unnoticed since then.
Fixes #7099, #7373.
|
| |
| |
| |
| | |
This is a cherry-pick of 1a1da60ad2c9172fe487cd38a164b39df60f4cb5 (#7470)
to the release-v1.13.0 branch.
|
| |
| |
| | |
This is safe as we can now write to cache invalidation stream on workers, and is required for when we move event persistence off master.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The aim here is to get to a stage where we have a `PersistEventStore` that holds all the write methods used during event persistence, so that we can take that class out of the `DataStore` mixin and instantiate it separately. This will allow us to instantiate it on processes other than master, while also ensuring it is only available on processes that are configured to write to the events stream.
This is a bit of an architectural change, where we end up with multiple classes per data store (rather than one per data store we have now). We end up having:
1. Storage classes that provide high level APIs that can talk to multiple data stores.
2. Data store modules that consist of classes that must point at the same database instance.
3. Classes in a data store that can be instantiated on processes depending on config.
|
| | |
|
| |
| |
| |
| | |
variables (#6391)
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* release-v1.13.0:
Don't UPGRADE database rows
RST indenting
Put rollback instructions in upgrade notes
Fix changelog typo
Oh yeah, RST
Absolute URL it is then
Fix upgrade notes link
Provide summary of upgrade issues in changelog. Fix )
Move next version notes from changelog to upgrade notes
Changelog fixes
1.13.0rc1
Documentation on setting up redis (#7446)
Rework UI Auth session validation for registration (#7455)
Fix errors from malformed log line (#7454)
Drop support for redis.dbid (#7450)
|
| |
| |
| |
| | |
Be less strict about validation of UI authentication sessions during
registration to match client expectations.
|
| | |
|
| | |
|
|\| |
|
| |\ |
|
| | |
| | |
| | |
| | |
| | | |
Currently we copy `users_who_share_room` needlessly about three times,
which is expensive when the set is large (which it can easily be).
|
|\ \ \
| | | |
| | | | |
Make get_e2e_cross_signing_key delegate to get_e2e_cross_signing_keys_bulk
|
| | | |
| | | |
| | | |
| | | | |
... mostly because the latter has a cache.
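Presumably the delegation looks something like this (a sketch; the real method also deals with key visibility for `from_user_id`):
```
async def get_e2e_cross_signing_key(self, user_id, key_type, from_user_id=None):
    # The bulk variant is cached, so delegate to it rather than issuing a
    # separate database query for a single user.
    keys_by_user = await self.get_e2e_cross_signing_keys_bulk([user_id], from_user_id)
    user_keys = keys_by_user.get(user_id)
    if not user_keys:
        return None
    return user_keys.get(key_type)
```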
|
| | |/
| |/|
| | |
| | |
| | | |
There's no point carefully dividing a list into batches, and then completely
ignoring the batches.
|
|\ \ \ |
|
| |/ /
| | |
| | |
| | |
| | |
| | | |
This will be used to coordinate stream IDs across multiple writers.
Functions as the equivalent of both `StreamIdGenerator` and
`SlavedIdTracker`.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
(#7387)
populate_stats_process_rooms was added in #5971 / v1.4.0; current_state_events_membership was added in #5706 / v1.3.0.
Fixes #7380.
|
| |/ |
|
| | |
|
|/
|
|
| |
most of these params don't really need to be lists.
|
|
|
|
|
| |
By persisting the user interactive authentication sessions to the database, this fixes
situations where a user hits different workers throughout their auth session and also
allows sessions to persist through restarts of Synapse.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Factor out functions for injecting events into database
I want to add some more flexibility to the tools for injecting events into the
database, and I don't want to clutter up HomeserverTestCase with them, so let's
factor them out to a new file.
* Rework TestReplicationDataHandler
This wasn't very easy to work with: the mock wrapping was largely superfluous,
and it's useful to be able to inspect the received rows, and clear out the
received list.
* Fix AssertionErrors being thrown by EventsStream
Part of the problem was that there was an off-by-one error in the assertion,
but also the limit logic was too simple. Fix it all up and add some tests.
|
|
|
|
|
| |
(#6881)
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
|
|
|
|
|
|
|
|
|
|
| |
Figuring out how to correctly limit updates from this stream without dropping
entries is far more complicated than just counting the number of rows being
returned. We need to consider each query separately and, if any one query hits
the limit, truncate the results from the others.
I think this also fixes some potentially long-standing bugs where events or
state changes could get missed if we hit the limit on either query.
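A simplified sketch of the "truncate to the lowest limit hit" idea described above (names invented for illustration):
```
def merge_limited_rows(event_rows, state_rows, current_token, limit):
    """Combine rows from two queries that were each capped at `limit` rows.

    Rows are (stream_id, ...) tuples. If either query hit its cap, we can
    only trust results up to the smallest last-returned stream ID, so both
    lists are truncated there and the caller is told the result was limited.
    """
    upper_bound = current_token
    limited = False
    if len(event_rows) == limit:
        upper_bound = event_rows[-1][0]
        limited = True
    if len(state_rows) == limit:
        upper_bound = min(upper_bound, state_rows[-1][0])
        limited = True

    rows = [r for r in event_rows + state_rows if r[0] <= upper_bound]
    return sorted(rows), upper_bound, limited
```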
|
| |
|
|
|
|
|
| |
We could end up looking up tens of thousands of events, which could cause large
amounts of data to be logged to the postgres log.
|
| |
|
|
|
|
| |
We seem to have some duplicates, which could do with being cleared out.
|
| |
|
|
|
| |
They just get in the way.
|
|
|
|
| |
Fixes a race between handling `POSITION` and `RDATA` commands. We do this by simply linearizing handling of them.
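In generic asyncio terms the linearization amounts to something like this (Synapse itself uses its own `Linearizer` utility rather than a plain lock):
```
import asyncio

class CommandHandler:
    def __init__(self):
        # One lock per stream, so POSITION and RDATA for the same stream are
        # applied strictly in the order they arrived.
        self._stream_locks = {}

    async def on_command(self, stream_name, cmd):
        lock = self._stream_locks.setdefault(stream_name, asyncio.Lock())
        async with lock:
            await self._process_command(stream_name, cmd)

    async def _process_command(self, stream_name, cmd):
        ...  # apply the POSITION or RDATA update
```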
|
| |
|
|\
| |
| | |
Only run one background update at a time
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| | |
returning a None or an int that we don't use is confusing.
|
| |
| |
| |
| | |
(Almost) everywhere that uses it is happy with an awaitable.
|
| |
| |
| |
| | |
This was only used in a unit test, so let's just inline it in the test.
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Occasionally we could get a federation device list update transaction which
looked like:
```
[
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D2', 'prev_id': [], 'stream_id': 12, 'deleted': True}},
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D1', 'prev_id': [12], 'stream_id': 11, 'deleted': True}},
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D3', 'prev_id': [11], 'stream_id': 13, 'deleted': True}}
]
```
Having `stream_ids` which are lower than `prev_ids` looks odd. It might work
(I'm not actually sure), but in any case it doesn't seem like a reasonable
thing to expect other implementations to support.
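One way to avoid this is to emit the queued pokes in stream ID order when building the transaction, along these lines (a simplified illustration, not the exact code):
```
def build_device_list_edus(pokes):
    """`pokes` is a list of (stream_id, content) queued for one destination.

    Emitting them in ascending stream_id order means each EDU's prev_id
    refers to an earlier stream_id, which is what remote servers expect.
    """
    edus = []
    prev_id = []
    for stream_id, content in sorted(pokes, key=lambda p: p[0]):
        content = dict(content, prev_id=prev_id, stream_id=stream_id)
        edus.append({"edu_type": "m.device_list_update", "content": content})
        prev_id = [stream_id]
    return edus
```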
|
|
|
| |
Signed-off-by: Karl Linderhed <git@karlinde.se>
|
| |
|
| |
|
|
|
|
| |
make sure we clear out all but one update for the user
|
| |
|
| |
|
|
|
|
| |
Fixes: #7127
Signed-off-by: David Vo <david@vovo.id.au>
|
|
|
| |
This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Pull Sentinel out of LoggingContext
... and drop a few unnecessary references to it
* Factor out LoggingContext.current_context
move `current_context` and `set_context` out to top-level functions.
Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.
* move copy-to-parent into `stop`
this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.
* Replace `LoggingContext.alive` with `finished`
Turn `alive` into `finished` and make it a bit better defined.
|
| |
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Add 'device_lists_outbound_pokes' as extra table.
This makes sure we check all the relevant tables to get the current max
stream ID.
Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
* Change device lists stream to have one row per id.
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
* Change device list replication to match new semantics.
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
* Newsfile
* Remove handling of multiple rows per ID
* Fix worker handling
* Comments from review
|
| | |
|
| |\
| | |
| | |
| | | |
erikj/fixup_devices_stream
|
| | | |
|
| | |
| | |
| | |
| | |
| | | |
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
|
| | |
| | |
| | |
| | |
| | | |
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This makes sure we check all the relevant tables to get the current max
stream ID.
Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
It was originally implemented by pulling the full auth chain of all
state sets out of the database and doing set comparison. However, that
can take a lot work if the state and auth chains are large.
Instead, let's try to fetch the auth chains at the same time and
calculate the difference on the fly, allowing us to bail early if all
the auth chains converge. Assuming that the auth chains do converge more
often than not, this should improve performance. Hopefully.
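A rough in-memory sketch of the "walk the chains together" idea (the real work happens inside the database via the storage layer, but the shape is similar):
```
def auth_chain_difference(get_auth_event_ids, state_sets):
    """Return event IDs that are in some but not all of the auth chains.

    `get_auth_event_ids(event_id)` returns the auth event IDs of one event;
    `state_sets` is a list of collections of event IDs. We walk all chains
    in lock-step, recording which state sets reach each auth event. (The
    real version can also stop early once the remaining chains converge.)
    """
    # Start each chain from the auth events of that state set's events.
    frontiers = [
        {auth_id for event_id in s for auth_id in get_auth_event_ids(event_id)}
        for s in state_sets
    ]
    # event_id -> set of state-set indices whose chain reaches it
    reachable = {}

    while any(frontiers):
        for idx, frontier in enumerate(frontiers):
            next_frontier = set()
            for event_id in frontier:
                seen_by = reachable.setdefault(event_id, set())
                if idx not in seen_by:
                    seen_by.add(idx)
                    next_frontier.update(get_auth_event_ids(event_id))
            frontiers[idx] = next_frontier

    num_sets = len(state_sets)
    return {e for e, seen_by in reachable.items() if len(seen_by) < num_sets}
```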
|
| | |
| | |
| | |
| | |
| | | |
Fixes #7065
This is basically the same as https://github.com/matrix-org/synapse/pull/6847 except it tries to populate events from `state_events` rather than `current_state_events`, since the latter might have been cleared from the state of some rooms too early, leaving them with a `NULL` room version.
|
| | | |
|
|\ \ \
| | | |
| | | |
| | | |
| | | | |
matrix-org/babolivier/get_time_of_last_push_action_before
Move get_time_of_last_push_action_before to the EventPushActionsWorkerStore
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Fixes #7054
I also had a look at the rest of the functions in
`EventPushActionsStore` and in the push notifications send code and it
looks to me like there shouldn't be any other method with this issue in
this part of the codebase.
|
| | | |
| | | |
| | | |
| | | | |
room ver. (#7037)
|
|/ / /
| | |
| | |
| | |
| | | |
* Break down monthly active users by appservice_id and emit via prometheus.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
| |/
|/|
| |
| |
| | |
This is a precursor to giving EventBase objects the knowledge of which room version they belong to.
|
|/
|
|
|
| |
This currently causes presence notify code to log exceptions when there
are no state changes to process. This doesn't actually cause any problems
as we'd simply do nothing anyway.
|
|
|
| |
Fix #6910
|
|
|
|
|
| |
I cracked, and added some type definitions in synapse.storage.
|
|
|
|
|
| |
When we get an invite over federation, store the room version in the rooms table.
The general idea here is that, when we pull the invite out again, we'll want to know what room_version it belongs to (so that we can later redact it if need be). So we need to store it somewhere...
|
|
|
| |
Signed-off-by: Uday Bansal <43824981+udaybansal19@users.noreply.github.com>
|
|
|
|
|
|
| |
Some of the database deltas rely on `config.server_name` being set correctly,
so we should check that it is before running the deltas.
Fixes #6870.
|
| |
|
|
|
|
|
|
|
|
| |
This is intended as a precursor to storing room versions when we receive an
invite over federation, but has the happy side-effect of fixing #3374 at last.
In short: change the store_room with try/except to a proper upsert which
updates the right columns.
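In Synapse's storage layer an upsert of that shape looks roughly like this (a hedged sketch; the exact method and column defaults in the real change may differ):
```
async def maybe_store_room_on_invite(self, room_id, room_version):
    # Insert the room if we don't know about it yet; if we do, just make sure
    # its room_version column is populated instead of failing on a duplicate key.
    await self.db.simple_upsert(
        table="rooms",
        keyvalues={"room_id": room_id},
        values={"room_version": room_version.identifier},
        insertion_values={"is_public": False, "creator": ""},
        desc="maybe_store_room_on_invite",
    )
```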
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
|
| |
|
|
|
| |
Signed-off-by: Ruben Barkow-Kuder <github@r.z11.de>
|
| |
|
|
|
|
|
|
| |
The state res v2 algorithm only cares about the difference between auth
chains, so we can pass in the known common state to the `get_auth_chain`
storage function so that it can ignore those events.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Increase DB/CPU perf of `_is_server_still_joined` check.
For rooms with a large amount of state, a single user leaving could cause
us to go and load a lot of membership events and then pull out
membership state in a large number of batches.
* Newsfile
* Update synapse/storage/persist_events.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
* Fix adding if too soon
* Update docstring
* Review comments
* Woops typo
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| |
|
|
|
| |
We do this by moving the recursive query to be fully in the DB.
|
|
|
|
| |
delete_old_current_state_events (#6924)
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.10.0rc2 (2020-02-06)
==============================
Bugfixes
--------
- Fix an issue with cross-signing where device signatures were not sent to remote servers. ([\#6844](https://github.com/matrix-org/synapse/issues/6844))
- Fix to the unknown remote device detection which was introduced in 1.10.0rc1. ([\#6848](https://github.com/matrix-org/synapse/issues/6848))
Internal Changes
----------------
- Detect unexpected sender keys on remote encrypted events and resync device lists. ([\#6850](https://github.com/matrix-org/synapse/issues/6850))
|
| |
| |
| | |
add device signatures to device key query results
|
| |
| |
| |
| |
| |
| |
| |
| | |
We were looking at the wrong event type (`m.room.encryption` vs
`m.room.encrypted`).
Also fixup the duplicate `EvenTypes` entries.
Introduced in #6776.
|
| |
| |
| |
| |
| | |
* Reduce txn performance logging to DEBUG
* Changelog.d
|
| |
| |
| | |
We're going to need this so that we can figure out how to handle redactions when fetching events from the database.
|
|/ |
|
|
|
|
| |
We were in fact only deleting the stale marker when we got an incremental
update, rather than when we did a full resync.
|
|
|
|
| |
So that we can start factoring out some of this boilerplatey boilerplate.
|
|
|
|
|
| |
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
When a server leaves a room it may stop sharing a room with remote
users, and thus not get any updates to their device lists. So we need to
check for this case and delete those device lists from the cache.
We don't need to do this if we stop sharing a room because the remote
user leaves the room, because we track that case by looking at
membership changes.
|
|
|
|
|
|
| |
If we detect that the remote users' keys may have changed then we should
attempt to resync against the remote server rather than using the
(potentially) stale local cache.
|
|
|
|
|
|
| |
Otherwise it's just stale data, which may get deleted later anyway, so
can't be relied on. It's also a bit of a shotgun if we're trying to get
the current state of a room we're not in.
|
|
|
|
| |
We just mark the fact that the cache may be stale in the database for
now.
|
|
|
| |
As using a non-C locale can cause issues when upgrading the OS.
|
|\ |
|
| |
| |
| |
| |
| | |
Calling the invalidation function during initialisation of the data
stores introduces a circular dependency, causing Synapse to fail to
start.
|
| |
| |
| | |
This is so that we don't have to rely on pulling it out from `current_state_events` table.
|
| |
| |
| | |
Currently if a worker invalidates a cache it will be streamed to master, which would then not forward it on to other workers.
|
| |
| |
| |
| |
| |
| | |
There are quite a few places that we assume that a redaction event has a
corresponding `redacts` key, which is not always the case. So let's
cheekily make it so that event.redacts just returns None instead.
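Conceptually that's just a defaulting property on the event class, e.g. (a simplified illustration, not the actual descriptor Synapse uses):
```
class EventBase:
    def __init__(self, event_dict):
        self._event_dict = event_dict

    @property
    def redacts(self):
        # Some redaction events in the wild lack a "redacts" key entirely;
        # returning None saves every caller from guarding against KeyError.
        return self._event_dict.get("redacts")
```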
|
|/ |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Fix #6727
Related #6655
Co-authored-by: Erik Johnston <erikj@jki.re>
|
|\
| |
| | |
Fix instantiation of message retention purge jobs
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When figuring out which topological token to start a purge job at, we
need to do the following:
1. Figure out a timestamp before which events will be purged
2. Select the first stream ordering after that timestamp
3. Select info about the first event after that stream ordering
4. Build a topological token from that info
In some situations (e.g. quiet rooms with a short max_lifetime), there
might not be an event after the stream ordering at step 3, therefore we
abort the purge with the error `No event found`. To mitigate that, this
patch fetches the first event _before_ the stream ordering, instead of
after.
|
| | |
|
| | |
|
| | |
|
|/
|
|
|
|
|
| |
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
|
| |
|
| |
|
|
|
|
|
| |
this saves doing it on each connection, and will allow us to pass extra options
in.
|
|
|
|
| |
We might not need the cursor at all.
|
|
|
| |
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
|
|\
| |
| | |
Fix media repo admin APIs when using a media worker.
|
| | |
|
| | |
|
| | |
|
| | |
|
|/
|
|
| |
Fixes #6552
|
|\
| |
| | |
Fix conditions failing if min_depth = 0
|
| |
| |
| |
| | |
This could result in Synapse not fetching prev_events for new events in the room if it has missed some events.
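The classic pitfall is a truthiness check that treats a depth of `0` the same as "no depth", e.g. (illustrative only, not the exact Synapse condition):
```
def should_fetch_missing_prev_events(min_depth, current_depth):
    # Buggy: "if min_depth and ..." is False whenever min_depth == 0, so we
    # would never decide to fetch the missing prev_events.
    #
    #     if min_depth and current_depth > min_depth: ...
    #
    # Fixed: test for "unset" explicitly instead of relying on truthiness.
    return min_depth is not None and current_depth > min_depth
```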
|
| |
| |
| |
| |
| |
| | |
* Add a background update to clear tombstoned rooms from the directory
* use the ABC metaclass
|
| |
| |
| |
| | |
so that bg update routines can be async
|
|\ \
| | |
| | | |
Fix exceptions in the synchrotron worker log when events are rejected.
|
| | |
| | |
| | |
| | | |
Make it clearer how they behave in the face of rejected and/or missing events.
|
|\ \ \
| |/ /
|/| | |
Remove a bunch of unused code from event creation
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | | |
create_new_client_event
|
| |/
| |
| |
| | |
... to make way for a new method which just returns the event ids
|
| |
| |
| |
| | |
Fixes #4026
|
|/ |
|
| |
|
| |
|
| |
|
|
|
|
| |
`Failed to upgrade database` is not helpful, and it's unlikely that UPGRADE.rst
has anything useful.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove redundant python2 support code
`str.decode()` doesn't exist on python3, so presumably this code was doing
nothing
* Filter out pushers with corrupt data
When we get a row with unparsable json, drop the row, rather than returning a
row with null `data`, which will then cause an explosion later on.
* Improve logging when we can't start a pusher
Log the ID to help us understand the problem
* Make email pusher setup more robust
We know we'll have a `data` member, since that comes from the database. What we
*don't* know is if that is a dict, and if that has a `brand` member, and if
that member is a string.
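A minimal sketch of the "drop rows with unparsable JSON" part (the surrounding storage code is omitted; names are illustrative):
```
import json
import logging

logger = logging.getLogger(__name__)

def decode_pusher_rows(rows):
    """Decode the `data` column of pusher rows, skipping corrupt entries."""
    pushers = []
    for row in rows:
        try:
            row["data"] = json.loads(row["data"])
        except Exception:
            # Log the pusher's ID so the bad row can be tracked down.
            logger.warning("Invalid JSON in data for pusher %s, skipping", row.get("id"))
            continue
        pushers.append(row)
    return pushers
```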
|
| |
|
|
|
|
|
| |
This encapsulates config for a given database and is the way to get new
connections.
|
|
|
| |
Signed-off-by: Werner Sembach <werner.sembach@fau.de>
|
| |
|
| |
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.7.0rc2 (2019-12-11)
=============================
Bugfixes
--------
- Fix incorrect error message for invalid requests when setting user's avatar URL. ([\#6497](https://github.com/matrix-org/synapse/issues/6497))
- Fix support for SQLite 3.7. ([\#6499](https://github.com/matrix-org/synapse/issues/6499))
- Fix regression where sending email push would not work when using a pusher worker. ([\#6507](https://github.com/matrix-org/synapse/issues/6507), [\#6509](https://github.com/matrix-org/synapse/issues/6509))
|
| |\
| | |
| | |
| | | |
release-v1.7.0
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| |/
|/|
| |
| | |
Stop the `update_client_ips` background job from recreating deleted devices.
|
| | |
|
| | |
|
| | |
|
|/
|
|
|
| |
Partial index support was added in SQLite 3.8.0, so we need to use the
background updates that handle this correctly.
|
|\
| |
| | |
Pass in Database object to data stores.
|
| |
| |
| |
| | |
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|\ \
| |/
|/| |
Port SyncHandler to async/await
|
| | |
|
|\ \
| | |
| | |
| | | |
erikj/make_database_class
|
| |/ |
|
| | |
|
| | |
|
| | |
|
|/ |
|
|\
| |
| | |
Clean up SQLBaseStore private function usage
|
| | |
|
| | |
|
|\ \
| | |
| | | |
Make synapse_port_db exit with a non-0 code if something failed
|
| |/ |
|
|/
|
|
|
|
|
|
|
|
|
|
|
| |
_update_auth_events_and_context_for_auth (#6468)
have_events was a map from event_id to rejection reason (or None) for events
which are in our local database. It was used as a filter on the list of
event_ids being passed into get_events_as_list. However, since
get_events_as_list will ignore any event_ids that are unknown or rejected, we
can equivalently just leave it to get_events_as_list to do the filtering.
That means that we don't have to keep `have_events` up-to-date, and can use
`have_seen_events` instead of `get_seen_events_with_rejection` in the one place
we do need it.
|
|\
| |
| | |
Move things out of SQLBaseStore
|
| |
| |
| |
| |
| |
| | |
This reverts commit 00f0d67566cdfe8eae44aeae1c982c42a255cfcd.
It's going to get removed soon, so let's not create merge conflicts.
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|\ \
| |/
|/| |
Filter state, events_before and events_after in /context requests
|
| |\ |
|
| | | |
|
|\ \ \
| | | |
| | | | |
make cross signing signature index non-unique
|
| | | | |
|
| | |/
| |/| |
|
| | |
| | |
| | |
| | |
| | | |
(hopefully)
... and deobfuscate the relevant bit of code.
|
|/ /
| |
| |
| |
| |
| |
| |
| | |
Implement part of [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). The parts that differ are:
* the feature is hidden behind a configuration flag (`enable_ephemeral_messages`)
* self-destruction doesn't happen for state events
* only implement support for the `m.self_destruct_after` field (not the `m.self_destruct` one)
* doesn't send synthetic redactions to clients because for this specific case we consider the clients to be able to destroy an event themselves, instead we just censor it (by pruning its JSON) in the database
|
| | |
|
| | |
|
|\ \ |
|
| | | |
|
| | | |
|
| |\ \ |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
There are lots of words in the comment as to why this is a good idea.
Fixes #6403.
|
| |\ \ \
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Synapse 1.6.0rc2 (2019-11-25)
=============================
Bugfixes
--------
- Fix a bug which could cause the background database update handler for event labels to get stuck in a loop raising exceptions. ([\#6407](https://github.com/matrix-org/synapse/issues/6407))
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Add some exception handling here so that events whose json cannot be parsed are
ignored rather than getting us stuck in a loop.
Fixes #6404.
|
| |/ / / |
|
| | | | |
|
| |/ / |
|
| |\ \
| | | |
| | | | |
Fix the SQL SELECT query in _paginate_room_events_txn
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|