author    Nick Mills-Barrett <nick@beeper.com>    2023-01-04 11:49:26 +0000
committer GitHub <noreply@github.com>             2023-01-04 11:49:26 +0000
commit    db1cfe9c80a707995fcad8f3faa839acb247068a (patch)
tree      691c711006765e770056d97db624043d5b87b781 /synapse/replication/tcp
parent    Add experimental support for MSC3391: deleting account data (#14714) (diff)
Update all stream IDs after processing replication rows (#14723)
This creates a new store method, `process_replication_position`, which is
called after `process_replication_rows`. Moving stream ID advances into this
method guarantees that any relevant cache invalidations have been applied
before the stream is advanced.
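As an illustration only (this is not Synapse's actual store code; the class and
attribute names below are invented for the sketch), a worker store following
this pattern might look roughly like:

    from typing import Any, Dict, List

    class SimpleIdTracker:
        """Tracks the highest stream ID seen per writer instance (sketch only)."""

        def __init__(self) -> None:
            self._positions: Dict[str, int] = {}

        def advance(self, instance_name: str, token: int) -> None:
            self._positions[instance_name] = max(
                self._positions.get(instance_name, 0), token
            )

    class ExampleWorkerStore:
        def __init__(self) -> None:
            self._stream_id_tracker = SimpleIdTracker()
            self._cache: Dict[Any, Any] = {}

        def process_replication_rows(
            self, stream_name: str, instance_name: str, token: int, rows: List[Any]
        ) -> None:
            # Invalidate any caches affected by the incoming rows.
            if stream_name == "example_stream":
                for row in rows:
                    self._cache.pop(getattr(row, "entity", None), None)

        def process_replication_position(
            self, stream_name: str, instance_name: str, token: int
        ) -> None:
            # Called strictly after process_replication_rows, so the stream ID
            # is only advanced once the cache invalidations above have run.
            if stream_name == "example_stream":
                self._stream_id_tracker.advance(instance_name, token)

The diff below wires this up on the replication client side: the handler calls
`process_replication_rows` first and `process_replication_position` second.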

This avoids race conditions where Python switches between threads midway
through `process_replication_rows`: due to class resolution ordering, stream
IDs could be advanced before caches had been invalidated.

See this comment/issue for further discussion:
	https://github.com/matrix-org/synapse/issues/14158#issuecomment-1344048703
Diffstat (limited to 'synapse/replication/tcp')
-rw-r--r--    synapse/replication/tcp/client.py    3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/synapse/replication/tcp/client.py b/synapse/replication/tcp/client.py
index 658d89210d..b5e40da533 100644
--- a/synapse/replication/tcp/client.py
+++ b/synapse/replication/tcp/client.py
@@ -152,6 +152,9 @@ class ReplicationDataHandler:
             rows: a list of Stream.ROW_TYPE objects as returned by Stream.parse_row.
         """
         self.store.process_replication_rows(stream_name, instance_name, token, rows)
+        # NOTE: this must be called after process_replication_rows to ensure any
+        # cache invalidations are first handled before any stream ID advances.
+        self.store.process_replication_position(stream_name, instance_name, token)
 
         if self.send_handler:
             await self.send_handler.process_replication_rows(stream_name, token, rows)