| author | Erik Johnston <erik@matrix.org> | 2020-06-09 16:28:57 +0100 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2020-06-09 16:28:57 +0100 |
| commit | 664409b1694f102b3fd03d825ae82b31a4311560 (patch) | |
| tree | aba5fdf5a1ad449e0924f2b56871284b97752379 /synapse/replication/slave | |
| parent | Convert the registration handler to async/await. (#7649) (diff) | |
| download | synapse-664409b1694f102b3fd03d825ae82b31a4311560.tar.xz | |
Fix bug in account data replication stream. (#7656)
* Ensure account data stream IDs are unique.

  The account data stream is shared between three tables, and the maximum allocated ID was tracked in a dedicated table. Updating the max ID happened outside the transaction that allocated the ID, leading to a race: if the server was restarted, the same ID could be allocated again while the max ID failed to be updated, so the ID was reused. The ID generators already support tracking IDs across multiple tables, so we may as well use that instead of a dedicated table.

* Fix bug in account data replication stream.

  If the same stream ID was used in both global and room account data, then getting updates for the replication stream would fail because `heapq.merge(..)` tried to compare a `str` with a `None`. (This is because you'd have two rows like `(534, '!room')` and `(534, None)` from the room and global account data tables.) The fix is simply to order by stream ID, since we don't rely on the ordering beyond that. The bug where stream IDs can be reused should be fixed now, so this case shouldn't happen going forward.

Fixes #7617
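A minimal, standalone sketch of the `heapq.merge(..)` failure described above. The row shapes `(534, '!room')` and `(534, None)` are taken from the commit message; everything else here is illustrative and not Synapse code.

```python
import heapq

# Room account data rows carry a room ID (str); global account data rows
# carry None in the same position. Both happen to share stream ID 534.
room_rows = [(534, "!room")]
global_rows = [(534, None)]

# heapq.merge compares tuples element by element. The stream IDs tie, so the
# comparison falls through to '!room' vs None, which raises TypeError on
# Python 3. merge() is lazy, so the error only surfaces when consumed.
try:
    list(heapq.merge(room_rows, global_rows))
except TypeError as exc:
    print(f"merge failed: {exc}")
```

Ordering by stream ID alone avoids this mixed-type comparison, which matches the "order by stream ID" fix the commit message describes.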
Diffstat (limited to 'synapse/replication/slave')
-rw-r--r-- | synapse/replication/slave/storage/account_data.py | 8 |
1 file changed, 7 insertions, 1 deletion
```diff
diff --git a/synapse/replication/slave/storage/account_data.py b/synapse/replication/slave/storage/account_data.py
index 2a4f5c7cfd..9db6c62bc7 100644
--- a/synapse/replication/slave/storage/account_data.py
+++ b/synapse/replication/slave/storage/account_data.py
@@ -24,7 +24,13 @@ from synapse.storage.database import Database
 class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlavedStore):
     def __init__(self, database: Database, db_conn, hs):
         self._account_data_id_gen = SlavedIdTracker(
-            db_conn, "account_data_max_stream_id", "stream_id"
+            db_conn,
+            "account_data",
+            "stream_id",
+            extra_tables=[
+                ("room_account_data", "stream_id"),
+                ("room_tags_revisions", "stream_id"),
+            ],
         )
 
         super(SlavedAccountDataStore, self).__init__(database, db_conn, hs)
```
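For readers unfamiliar with the `extra_tables` argument, below is a simplified sketch of the underlying idea: the tracker seeds its position from the highest `stream_id` across every table that shares the stream, rather than from a dedicated `account_data_max_stream_id` table. This is not Synapse's actual `SlavedIdTracker`; the `seed_stream_position` helper and the `sqlite3` schema are invented for illustration.

```python
import sqlite3
from typing import List, Tuple


def seed_stream_position(conn: sqlite3.Connection, tables: List[Tuple[str, str]]) -> int:
    """Return the highest allocated stream ID across the given (table, column) pairs."""
    current = 0
    for table, column in tables:
        # MAX() returns NULL (None) for an empty table, so skip those.
        (max_id,) = conn.execute(f"SELECT MAX({column}) FROM {table}").fetchone()
        if max_id is not None:
            current = max(current, max_id)
    return current


conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE account_data (stream_id INTEGER);
    CREATE TABLE room_account_data (stream_id INTEGER);
    CREATE TABLE room_tags_revisions (stream_id INTEGER);
    INSERT INTO account_data VALUES (530);
    INSERT INTO room_account_data VALUES (534);
    """
)

print(
    seed_stream_position(
        conn,
        [
            ("account_data", "stream_id"),
            ("room_account_data", "stream_id"),
            ("room_tags_revisions", "stream_id"),
        ],
    )
)  # -> 534, the true maximum across the three tables
```

Seeding from the tables themselves means the position can never lag behind the IDs actually written, which is what made the dedicated max-ID table racy when it was updated outside the allocating transaction.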