author    Marek Matys <57749215+thermaq@users.noreply.github.com> 2021-05-21 13:02:06 +0200
committer GitHub <noreply@github.com> 2021-05-21 12:02:06 +0100
commit    6a8643ff3da905568e3f2ec047182753352e39d1 (patch)
tree      f329eb78abd5a80f779be75df5d59b8442ed10d7 /synapse/storage/databases/main/presence.py
parent    Add a batching queue implementation. (#10017) (diff)
Fixed removal of new presence stream states (#10014)
Fixes: https://github.com/matrix-org/synapse/issues/9962

This fixes the problem above.

The fix swaps the order of the insertion of new records and the deletion of old ones. Performing the deletes before the inserts ensures that we never delete freshly inserted database records.

Signed-off-by: Marek Matys <themarcq@gmail.com>
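The ordering issue can be illustrated with a minimal standalone sketch (not Synapse code: an in-memory list of `(stream_id, user_id)` tuples stands in for the `presence_stream` table, and `stream_ids` are hypothetical). Each update in a batch gets its own stream id, and the cleanup deletes rows with `stream_id` below the batch's top id, so running the delete after the insert also removes most of the rows just inserted:

```python
def update_presence(rows, user_ids, stream_ids, delete_first):
    """Simulate one presence-update transaction on an in-memory 'table'."""
    top_id = stream_ids[-1]  # highest stream_id assigned to this batch
    new_rows = list(zip(stream_ids, user_ids))

    def purge(table):
        # Mimics: DELETE FROM presence_stream WHERE stream_id < ? AND user_id IN (...)
        return [r for r in table if not (r[0] < top_id and r[1] in user_ids)]

    if delete_first:
        # Fixed order: purge stale rows, then insert the fresh batch.
        rows = purge(rows)
        rows.extend(new_rows)
    else:
        # Old order: insert first, then the delete also wipes out every
        # freshly inserted row except the one with the top stream_id.
        rows.extend(new_rows)
        rows = purge(rows)
    return rows

table = [(1, "@a:hs"), (2, "@b:hs")]  # stale presence rows
buggy = update_presence(list(table), ["@a:hs", "@b:hs"], [5, 6], delete_first=False)
fixed = update_presence(list(table), ["@a:hs", "@b:hs"], [5, 6], delete_first=True)
```

With the old order `buggy` keeps only the single row `(6, "@b:hs")`, losing the fresh state for `@a:hs`; with the fixed order both new rows survive.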
Diffstat (limited to 'synapse/storage/databases/main/presence.py')
-rw-r--r-- synapse/storage/databases/main/presence.py | 18
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/synapse/storage/databases/main/presence.py b/synapse/storage/databases/main/presence.py
index 669a2af884..6a2baa7841 100644
--- a/synapse/storage/databases/main/presence.py
+++ b/synapse/storage/databases/main/presence.py
@@ -97,6 +97,15 @@ class PresenceStore(SQLBaseStore):
             )
             txn.call_after(self._get_presence_for_user.invalidate, (state.user_id,))
 
+        # Delete old rows to stop database from getting really big
+        sql = "DELETE FROM presence_stream WHERE stream_id < ? AND "
+
+        for states in batch_iter(presence_states, 50):
+            clause, args = make_in_list_sql_clause(
+                self.database_engine, "user_id", [s.user_id for s in states]
+            )
+            txn.execute(sql + clause, [stream_id] + list(args))
+
         # Actually insert new rows
         self.db_pool.simple_insert_many_txn(
             txn,
@@ -117,15 +126,6 @@ class PresenceStore(SQLBaseStore):
             ],
         )
 
-        # Delete old rows to stop database from getting really big
-        sql = "DELETE FROM presence_stream WHERE stream_id < ? AND "
-
-        for states in batch_iter(presence_states, 50):
-            clause, args = make_in_list_sql_clause(
-                self.database_engine, "user_id", [s.user_id for s in states]
-            )
-            txn.execute(sql + clause, [stream_id] + list(args))
-
     async def get_all_presence_updates(
         self, instance_name: str, last_id: int, current_id: int, limit: int
     ) -> Tuple[List[Tuple[int, list]], int, bool]:
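The batched DELETE in the hunk above can be sketched standalone against SQLite. The helpers here are simplified stand-ins for Synapse's real `batch_iter` and `make_in_list_sql_clause` (which also takes the database engine as its first argument), written only to show the pattern of chunking user ids and parameterising the `IN` clause:

```python
import sqlite3
from itertools import islice

def batch_iter(iterable, size):
    # Yield tuples of up to `size` items (simplified stand-in).
    it = iter(iterable)
    while chunk := tuple(islice(it, size)):
        yield chunk

def make_in_list_sql_clause(column, values):
    # Build "column IN (?, ?, ...)" plus its parameter list (simplified stand-in).
    placeholders = ", ".join("?" * len(values))
    return f"{column} IN ({placeholders})", list(values)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE presence_stream (stream_id INTEGER, user_id TEXT)")
conn.executemany(
    "INSERT INTO presence_stream VALUES (?, ?)",
    [(1, "@a:hs"), (2, "@b:hs"), (7, "@a:hs"), (8, "@b:hs")],
)

# Delete rows older than stream_id 7 for the batched users, 50 users per statement.
sql = "DELETE FROM presence_stream WHERE stream_id < ? AND "
stream_id = 7
for users in batch_iter(["@a:hs", "@b:hs"], 50):
    clause, args = make_in_list_sql_clause("user_id", users)
    conn.execute(sql + clause, [stream_id] + args)

rows = conn.execute(
    "SELECT stream_id, user_id FROM presence_stream ORDER BY stream_id"
).fetchall()
```

After the loop only the rows at or above the cutoff remain: `(7, "@a:hs")` and `(8, "@b:hs")`. Chunking keeps each statement's parameter count bounded, which matters because SQLite and Postgres both cap the number of bound parameters per query.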