author     Erik Johnston <erik@matrix.org>  2024-06-19 17:39:55 +0100
committer  Erik Johnston <erik@matrix.org>  2024-06-19 17:39:55 +0100
commit     e1324ab2c1fef0c1a1fe39b68e5662c00aa372f3 (patch)
tree       62af027ef3192f390b61c71da6b6bb6c42d0bf58
parent     Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes (diff)
parent     Revert "Handle large chain calc better (#17291)" (#17334) (diff)
download   synapse-e1324ab2c1fef0c1a1fe39b68e5662c00aa372f3.tar.xz
Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes
-rw-r--r--  README.rst                                      75
-rw-r--r--  changelog.d/17198.misc                           1
-rw-r--r--  changelog.d/17276.feature                        1
-rw-r--r--  changelog.d/17291.misc                           1
-rw-r--r--  changelog.d/17304.feature                        2
-rw-r--r--  changelog.d/17324.misc                           1
-rw-r--r--  changelog.d/17331.misc                           1
-rw-r--r--  debian/changelog                                 2
-rw-r--r--  debian/register_new_matrix_user.ronn             3
-rw-r--r--  docker/conf/homeserver.yaml                      1
-rw-r--r--  docs/admin_api/rooms.md                          4
-rw-r--r--  synapse/_scripts/register_new_matrix_user.py    30
-rw-r--r--  synapse/rest/admin/rooms.py                     13
-rw-r--r--  synapse/rest/client/sync.py                      2
-rw-r--r--  synapse/storage/controllers/persist_events.py   12
-rw-r--r--  synapse/storage/databases/main/events.py       249
-rw-r--r--  synapse/storage/databases/main/room.py          51
-rw-r--r--  tests/rest/admin/test_room.py                   77
-rw-r--r--  tests/rest/client/test_sync.py                   4
-rw-r--r--  tests/storage/test_event_chain.py                9
-rw-r--r--  tests/storage/test_event_federation.py          41
21 files changed, 304 insertions(+), 276 deletions(-)
diff --git a/README.rst b/README.rst
index d13dc0cb78..db9b79a237 100644
--- a/README.rst
+++ b/README.rst
@@ -1,21 +1,34 @@
-=========================================================================
-Synapse |support| |development| |documentation| |license| |pypi| |python|
-=========================================================================
-
-Synapse is an open-source `Matrix <https://matrix.org/>`_ homeserver written and
-maintained by the Matrix.org Foundation. We began rapid development in 2014,
-reaching v1.0.0 in 2019. Development on Synapse and the Matrix protocol itself continues
-in earnest today.
-
-Briefly, Matrix is an open standard for communications on the internet, supporting
-federation, encryption and VoIP. Matrix.org has more to say about the `goals of the
-Matrix project <https://matrix.org/docs/guides/introduction>`_, and the `formal specification
-<https://spec.matrix.org/>`_ describes the technical details.
+.. image:: https://github.com/element-hq/product/assets/87339233/7abf477a-5277-47f3-be44-ea44917d8ed7
+   :height: 60px
+
+====================================================================================================================
+Element Synapse - Matrix homeserver implementation |support| |development| |documentation| |license| |pypi| |python|
+====================================================================================================================
+
+Synapse is an open source `Matrix <https://matrix.org>`_ homeserver
+implementation, written and maintained by `Element <https://element.io>`_.
+`Matrix <https://github.com/matrix-org>`_ is the open standard for
+secure and interoperable real time communications. You can directly run
+and manage the source code in this repository, available under an AGPL
+license. There is no support provided from Element unless you have a
+subscription.
+
+Subscription alternative
+------------------------
+
+Alternatively, for those that need an enterprise-ready solution, Element
+Server Suite (ESS) is `available as a subscription <https://element.io/pricing>`_.
+ESS builds on Synapse to offer a complete Matrix-based backend including the full
+`Admin Console product <https://element.io/enterprise-functionality/admin-console>`_,
+giving admins the power to easily manage an organization-wide
+deployment. It includes advanced identity management, auditing,
+moderation and data retention options as well as Long Term Support and
+SLAs. ESS can be used to support any Matrix-based frontend client.
 
 .. contents::
 
-Installing and configuration
-============================
+🛠️ Installing and configuration
+===============================
 
 The Synapse documentation describes `how to install Synapse <https://element-hq.github.io/synapse/latest/setup/installation.html>`_. We recommend using
 `Docker images <https://element-hq.github.io/synapse/latest/setup/installation.html#docker-images-and-ansible-playbooks>`_ or `Debian packages from Matrix.org
@@ -105,8 +118,8 @@ Following this advice ensures that even if an XSS is found in Synapse, the
 impact to other applications will be minimal.
 
 
-Testing a new installation
-==========================
+🧪 Testing a new installation
+============================
 
 The easiest way to try out your new Synapse installation is by connecting to it
 from a web client.
@@ -159,8 +172,20 @@ the form of::
 As when logging in, you will need to specify a "Custom server".  Specify your
 desired ``localpart`` in the 'User name' box.
 
-Troubleshooting and support
-===========================
+🎯 Troubleshooting and support
+=============================
+
+🚀 Professional support
+----------------------
+
+Enterprise-quality support for Synapse, including SLAs, is available as part of an
+`Element Server Suite (ESS) <https://element.io/pricing>`_ subscription.
+
+If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`_
+and access the `knowledge base <https://ems-docs.element.io>`_.
+
+🤝 Community support
+-------------------
 
 The `Admin FAQ <https://element-hq.github.io/synapse/latest/usage/administration/admin_faq.html>`_
 includes tips on dealing with some common problems. For more details, see
@@ -176,8 +201,8 @@ issues for support requests, only for bug reports and feature requests.
 .. |docs| replace:: ``docs``
 .. _docs: docs
 
-Identity Servers
-================
+🪪 Identity Servers
+==================
 
 Identity servers have the job of mapping email addresses and other 3rd Party
 IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs
@@ -206,8 +231,8 @@ an email address with your account, or send an invite to another user via their
 email address.
 
 
-Development
-===========
+🛠️ Development
+==============
 
 We welcome contributions to Synapse from the community!
 The best place to get started is our
@@ -225,8 +250,8 @@ Alongside all that, join our developer community on Matrix:
 `#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_, featuring real humans!
 
 
-.. |support| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
-  :alt: (get support on #synapse:matrix.org)
+.. |support| image:: https://img.shields.io/badge/matrix-community%20support-success
+  :alt: (get community support in #synapse:matrix.org)
   :target: https://matrix.to/#/#synapse:matrix.org
 
 .. |development| image:: https://img.shields.io/matrix/synapse-dev:matrix.org?label=development&logo=matrix
diff --git a/changelog.d/17198.misc b/changelog.d/17198.misc
new file mode 100644
index 0000000000..8973eb2bac
--- /dev/null
+++ b/changelog.d/17198.misc
@@ -0,0 +1 @@
+Remove unused `expire_access_token` option in the Synapse Docker config file. Contributed by @AaronDewes.
\ No newline at end of file
diff --git a/changelog.d/17276.feature b/changelog.d/17276.feature
new file mode 100644
index 0000000000..a1edfae0aa
--- /dev/null
+++ b/changelog.d/17276.feature
@@ -0,0 +1 @@
+Filter for public and empty rooms added to Admin-API [List Room API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#list-room-api).
diff --git a/changelog.d/17291.misc b/changelog.d/17291.misc
deleted file mode 100644
index b1f89a324d..0000000000
--- a/changelog.d/17291.misc
+++ /dev/null
@@ -1 +0,0 @@
-Do not block event sending/receiving while calulating large event auth chains.
diff --git a/changelog.d/17304.feature b/changelog.d/17304.feature
new file mode 100644
index 0000000000..a969d8bf58
--- /dev/null
+++ b/changelog.d/17304.feature
@@ -0,0 +1,2 @@
+`register_new_matrix_user` now supports an `--exists-ok` flag to allow registration of users that already exist in the database.
+This is useful for scripts that bootstrap user accounts with initial passwords.
diff --git a/changelog.d/17324.misc b/changelog.d/17324.misc
new file mode 100644
index 0000000000..c0d7196ee0
--- /dev/null
+++ b/changelog.d/17324.misc
@@ -0,0 +1 @@
+Update the README with Element branding, improve headers and fix the #synapse:matrix.org support room link rendering.
\ No newline at end of file
diff --git a/changelog.d/17331.misc b/changelog.d/17331.misc
new file mode 100644
index 0000000000..79d3f33996
--- /dev/null
+++ b/changelog.d/17331.misc
@@ -0,0 +1 @@
+Change path of the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync implementation to `/org.matrix.simplified_msc3575/sync` since our simplified API is slightly incompatible with what's in the current MSC.
diff --git a/debian/changelog b/debian/changelog
index 55e17bd868..731eacf20f 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,6 +1,6 @@
 matrix-synapse-py3 (1.109.0+nmu1) UNRELEASED; urgency=medium
 
-  * `register_new_matrix_user` now supports a --password-file flag.
+  * `register_new_matrix_user` now supports a --password-file and an --exists-ok flag.
 
  -- Synapse Packaging team <packages@matrix.org>  Tue, 18 Jun 2024 13:29:36 +0100
 
diff --git a/debian/register_new_matrix_user.ronn b/debian/register_new_matrix_user.ronn
index 963e67c004..aa305ec671 100644
--- a/debian/register_new_matrix_user.ronn
+++ b/debian/register_new_matrix_user.ronn
@@ -48,6 +48,9 @@ A sample YAML file accepted by `register_new_matrix_user` is described below:
     Shared secret as defined in server config file. This is an optional
     parameter as it can be also supplied via the YAML file.
 
+  * `--exists-ok`:
+    Do not fail if the user already exists. The user account will not be updated in this case.
+
   * `server_url`:
     URL of the home server. Defaults to 'https://localhost:8448'.
 
diff --git a/docker/conf/homeserver.yaml b/docker/conf/homeserver.yaml
index c412ba2e87..2890990705 100644
--- a/docker/conf/homeserver.yaml
+++ b/docker/conf/homeserver.yaml
@@ -176,7 +176,6 @@ app_service_config_files:
 {% endif %}
 
 macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"
-expire_access_token: False
 
 ## Signing Keys ##
 
diff --git a/docs/admin_api/rooms.md b/docs/admin_api/rooms.md
index 6935ec4a45..8e3a367e90 100644
--- a/docs/admin_api/rooms.md
+++ b/docs/admin_api/rooms.md
@@ -36,6 +36,10 @@ The following query parameters are available:
   - the room's name,
   - the local part of the room's canonical alias, or
   - the complete (local and server part) room's id (case sensitive).
+* `public_rooms` - Optional flag to filter public rooms. If `true`, only public rooms are queried. If `false`, public rooms are excluded from
+  the query. When the flag is absent (the default), **both** public and non-public rooms are included in the search results.
+* `empty_rooms` - Optional flag to filter empty rooms. A room is empty if `joined_members` is zero. If `true`, only empty rooms are queried. If `false`, empty rooms are excluded from
+  the query. When the flag is absent (the default), **both** empty and non-empty rooms are included in the search results.
 
   Defaults to no filtering.
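The two flags above combine with the existing pagination parameters of the List Room API. As a minimal sketch of how a caller might assemble the request URL (the helper name and base URL here are illustrative, not part of Synapse):

```python
from urllib.parse import urlencode

def build_list_rooms_url(base_url, public_rooms=None, empty_rooms=None):
    """Build a List Room API URL with the optional boolean filters.

    Flags left as None are omitted entirely, which matches the
    documented default of "no filtering" for that dimension.
    """
    params = {}
    if public_rooms is not None:
        params["public_rooms"] = "true" if public_rooms else "false"
    if empty_rooms is not None:
        params["empty_rooms"] = "true" if empty_rooms else "false"
    query = urlencode(params)
    url = f"{base_url}/_synapse/admin/v1/rooms"
    return f"{url}?{query}" if query else url

# Only public, non-empty rooms:
print(build_list_rooms_url("https://localhost:8448",
                           public_rooms=True, empty_rooms=False))
# https://localhost:8448/_synapse/admin/v1/rooms?public_rooms=true&empty_rooms=false
```

An admin access token must still be supplied via the `Authorization` header when the request is actually made.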
 
diff --git a/synapse/_scripts/register_new_matrix_user.py b/synapse/_scripts/register_new_matrix_user.py
index 972b35e2dc..14cb21c7fb 100644
--- a/synapse/_scripts/register_new_matrix_user.py
+++ b/synapse/_scripts/register_new_matrix_user.py
@@ -52,6 +52,7 @@ def request_registration(
     user_type: Optional[str] = None,
     _print: Callable[[str], None] = print,
     exit: Callable[[int], None] = sys.exit,
+    exists_ok: bool = False,
 ) -> None:
     url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)
 
@@ -97,6 +98,10 @@ def request_registration(
     r = requests.post(url, json=data)
 
     if r.status_code != 200:
+        response = r.json()
+        if exists_ok and response["errcode"] == "M_USER_IN_USE":
+            _print("User already exists. Skipping.")
+            return
         _print("ERROR! Received %d %s" % (r.status_code, r.reason))
         if 400 <= r.status_code < 500:
             try:
@@ -115,6 +120,7 @@ def register_new_user(
     shared_secret: str,
     admin: Optional[bool],
     user_type: Optional[str],
+    exists_ok: bool = False,
 ) -> None:
     if not user:
         try:
@@ -154,7 +160,13 @@ def register_new_user(
             admin = False
 
     request_registration(
-        user, password, server_location, shared_secret, bool(admin), user_type
+        user,
+        password,
+        server_location,
+        shared_secret,
+        bool(admin),
+        user_type,
+        exists_ok=exists_ok,
     )
 
 
@@ -173,6 +185,11 @@ def main() -> None:
         default=None,
         help="Local part of the new user. Will prompt if omitted.",
     )
+    parser.add_argument(
+        "--exists-ok",
+        action="store_true",
+        help="Do not fail if user already exists.",
+    )
     password_group = parser.add_mutually_exclusive_group()
     password_group.add_argument(
         "-p",
@@ -192,6 +209,7 @@ def main() -> None:
         default=None,
         help="User type as specified in synapse.api.constants.UserTypes",
     )
+
     admin_group = parser.add_mutually_exclusive_group()
     admin_group.add_argument(
         "-a",
@@ -281,7 +299,15 @@ def main() -> None:
     if args.admin or args.no_admin:
         admin = args.admin
 
-    register_new_user(args.user, password, server_url, secret, admin, args.user_type)
+    register_new_user(
+        args.user,
+        password,
+        server_url,
+        secret,
+        admin,
+        args.user_type,
+        exists_ok=args.exists_ok,
+    )
 
 
 def _read_file(file_path: Any, config_path: str) -> str:
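The `--exists-ok` behaviour added above hinges on the `M_USER_IN_USE` error code returned by the registration endpoint. A standalone sketch of that branch, with a hypothetical helper standing in for the inline check and a plain dict standing in for `r.json()`:

```python
def handle_registration_failure(response_body, exists_ok):
    """Mirror the new error handling: treat M_USER_IN_USE as a
    non-error when exists_ok is set, otherwise report the failure.

    Returns True if the failure should be swallowed.
    """
    if exists_ok and response_body.get("errcode") == "M_USER_IN_USE":
        print("User already exists. Skipping.")
        return True
    return False

# A duplicate-user error is ignored only when --exists-ok was passed:
body = {"errcode": "M_USER_IN_USE", "error": "User ID already taken."}
assert handle_registration_failure(body, exists_ok=True) is True
assert handle_registration_failure(body, exists_ok=False) is False
```

Note that, as the ronn manpage change states, the existing account is left untouched in this case; in particular the supplied password is not applied to it.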
diff --git a/synapse/rest/admin/rooms.py b/synapse/rest/admin/rooms.py
index 0d86a4e15f..01f9de9ffa 100644
--- a/synapse/rest/admin/rooms.py
+++ b/synapse/rest/admin/rooms.py
@@ -35,6 +35,7 @@ from synapse.http.servlet import (
     ResolveRoomIdMixin,
     RestServlet,
     assert_params_in_dict,
+    parse_boolean,
     parse_enum,
     parse_integer,
     parse_json,
@@ -242,13 +243,23 @@ class ListRoomRestServlet(RestServlet):
                 errcode=Codes.INVALID_PARAM,
             )
 
+        public_rooms = parse_boolean(request, "public_rooms")
+        empty_rooms = parse_boolean(request, "empty_rooms")
+
         direction = parse_enum(request, "dir", Direction, default=Direction.FORWARDS)
         reverse_order = True if direction == Direction.BACKWARDS else False
 
         # Return list of rooms according to parameters
         rooms, total_rooms = await self.store.get_rooms_paginate(
-            start, limit, order_by, reverse_order, search_term
+            start,
+            limit,
+            order_by,
+            reverse_order,
+            search_term,
+            public_rooms,
+            empty_rooms,
         )
+
         response = {
             # next_token should be opaque, so return a value the client can parse
             "offset": start,
diff --git a/synapse/rest/client/sync.py b/synapse/rest/client/sync.py
index 1b0ac20d94..b5ab0d8534 100644
--- a/synapse/rest/client/sync.py
+++ b/synapse/rest/client/sync.py
@@ -864,7 +864,7 @@ class SlidingSyncRestServlet(RestServlet):
     """
 
     PATTERNS = client_patterns(
-        "/org.matrix.msc3575/sync$", releases=[], v1=False, unstable=True
+        "/org.matrix.simplified_msc3575/sync$", releases=[], v1=False, unstable=True
     )
 
     def __init__(self, hs: "HomeServer"):
diff --git a/synapse/storage/controllers/persist_events.py b/synapse/storage/controllers/persist_events.py
index d0e015bf19..84699a2ee1 100644
--- a/synapse/storage/controllers/persist_events.py
+++ b/synapse/storage/controllers/persist_events.py
@@ -617,17 +617,6 @@ class EventsPersistenceStorageController:
                         room_id, chunk
                     )
 
-            with Measure(self._clock, "calculate_chain_cover_index_for_events"):
-                # We now calculate chain ID/sequence numbers for any state events we're
-                # persisting. We ignore out of band memberships as we're not in the room
-                # and won't have their auth chain (we'll fix it up later if we join the
-                # room).
-                #
-                # See: docs/auth_chain_difference_algorithm.md
-                new_event_links = await self.persist_events_store.calculate_chain_cover_index_for_events(
-                    room_id, [e for e, _ in chunk]
-                )
-
             await self.persist_events_store._persist_events_and_state_updates(
                 room_id,
                 chunk,
@@ -635,7 +624,6 @@ class EventsPersistenceStorageController:
                 new_forward_extremities=new_forward_extremities,
                 use_negative_stream_ordering=backfilled,
                 inhibit_local_membership_updates=backfilled,
-                new_event_links=new_event_links,
             )
 
         return replaced_events
diff --git a/synapse/storage/databases/main/events.py b/synapse/storage/databases/main/events.py
index c6df13c064..66428e6c8e 100644
--- a/synapse/storage/databases/main/events.py
+++ b/synapse/storage/databases/main/events.py
@@ -34,6 +34,7 @@ from typing import (
     Optional,
     Set,
     Tuple,
+    Union,
     cast,
 )
 
@@ -99,23 +100,6 @@ class DeltaState:
         return not self.to_delete and not self.to_insert and not self.no_longer_in_room
 
 
-@attr.s(slots=True, auto_attribs=True)
-class NewEventChainLinks:
-    """Information about new auth chain links that need to be added to the DB.
-
-    Attributes:
-        chain_id, sequence_number: the IDs corresponding to the event being
-            inserted, and the starting point of the links
-        links: Lists the links that need to be added, 2-tuple of the chain
-            ID/sequence number of the end point of the link.
-    """
-
-    chain_id: int
-    sequence_number: int
-
-    links: List[Tuple[int, int]] = attr.Factory(list)
-
-
 class PersistEventsStore:
     """Contains all the functions for writing events to the database.
 
@@ -164,7 +148,6 @@ class PersistEventsStore:
         *,
         state_delta_for_room: Optional[DeltaState],
         new_forward_extremities: Optional[Set[str]],
-        new_event_links: Dict[str, NewEventChainLinks],
         use_negative_stream_ordering: bool = False,
         inhibit_local_membership_updates: bool = False,
     ) -> None:
@@ -234,7 +217,6 @@ class PersistEventsStore:
                 inhibit_local_membership_updates=inhibit_local_membership_updates,
                 state_delta_for_room=state_delta_for_room,
                 new_forward_extremities=new_forward_extremities,
-                new_event_links=new_event_links,
             )
             persist_event_counter.inc(len(events_and_contexts))
 
@@ -261,87 +243,6 @@ class PersistEventsStore:
                     (room_id,), frozenset(new_forward_extremities)
                 )
 
-    async def calculate_chain_cover_index_for_events(
-        self, room_id: str, events: Collection[EventBase]
-    ) -> Dict[str, NewEventChainLinks]:
-        # Filter to state events, and ensure there are no duplicates.
-        state_events = []
-        seen_events = set()
-        for event in events:
-            if not event.is_state() or event.event_id in seen_events:
-                continue
-
-            state_events.append(event)
-            seen_events.add(event.event_id)
-
-        if not state_events:
-            return {}
-
-        return await self.db_pool.runInteraction(
-            "_calculate_chain_cover_index_for_events",
-            self.calculate_chain_cover_index_for_events_txn,
-            room_id,
-            state_events,
-        )
-
-    def calculate_chain_cover_index_for_events_txn(
-        self, txn: LoggingTransaction, room_id: str, state_events: Collection[EventBase]
-    ) -> Dict[str, NewEventChainLinks]:
-        # We now calculate chain ID/sequence numbers for any state events we're
-        # persisting. We ignore out of band memberships as we're not in the room
-        # and won't have their auth chain (we'll fix it up later if we join the
-        # room).
-        #
-        # See: docs/auth_chain_difference_algorithm.md
-
-        # We ignore legacy rooms that we aren't filling the chain cover index
-        # for.
-        row = self.db_pool.simple_select_one_txn(
-            txn,
-            table="rooms",
-            keyvalues={"room_id": room_id},
-            retcols=("room_id", "has_auth_chain_index"),
-            allow_none=True,
-        )
-        if row is None:
-            return {}
-
-        # Filter out already persisted events.
-        rows = self.db_pool.simple_select_many_txn(
-            txn,
-            table="events",
-            column="event_id",
-            iterable=[e.event_id for e in state_events],
-            keyvalues={},
-            retcols=("event_id",),
-        )
-        already_persisted_events = {event_id for event_id, in rows}
-        state_events = [
-            event
-            for event in state_events
-            if event.event_id in already_persisted_events
-        ]
-
-        if not state_events:
-            return {}
-
-        # We need to know the type/state_key and auth events of the events we're
-        # calculating chain IDs for. We don't rely on having the full Event
-        # instances as we'll potentially be pulling more events from the DB and
-        # we don't need the overhead of fetching/parsing the full event JSON.
-        event_to_types = {e.event_id: (e.type, e.state_key) for e in state_events}
-        event_to_auth_chain = {e.event_id: e.auth_event_ids() for e in state_events}
-        event_to_room_id = {e.event_id: e.room_id for e in state_events}
-
-        return self._calculate_chain_cover_index(
-            txn,
-            self.db_pool,
-            self.store.event_chain_id_gen,
-            event_to_room_id,
-            event_to_types,
-            event_to_auth_chain,
-        )
-
     async def _get_events_which_are_prevs(self, event_ids: Iterable[str]) -> List[str]:
         """Filter the supplied list of event_ids to get those which are prev_events of
         existing (non-outlier/rejected) events.
@@ -457,7 +358,6 @@ class PersistEventsStore:
         inhibit_local_membership_updates: bool,
         state_delta_for_room: Optional[DeltaState],
         new_forward_extremities: Optional[Set[str]],
-        new_event_links: Dict[str, NewEventChainLinks],
     ) -> None:
         """Insert some number of room events into the necessary database tables.
 
@@ -566,9 +466,7 @@ class PersistEventsStore:
         # Insert into event_to_state_groups.
         self._store_event_state_mappings_txn(txn, events_and_contexts)
 
-        self._persist_event_auth_chain_txn(
-            txn, [e for e, _ in events_and_contexts], new_event_links
-        )
+        self._persist_event_auth_chain_txn(txn, [e for e, _ in events_and_contexts])
 
         # _store_rejected_events_txn filters out any events which were
         # rejected, and returns the filtered list.
@@ -598,7 +496,6 @@ class PersistEventsStore:
         self,
         txn: LoggingTransaction,
         events: List[EventBase],
-        new_event_links: Dict[str, NewEventChainLinks],
     ) -> None:
         # We only care about state events, so skip this if there are no state events.
         if not any(e.is_state() for e in events):
@@ -622,40 +519,62 @@ class PersistEventsStore:
             ],
         )
 
-        if new_event_links:
-            self._persist_chain_cover_index(txn, self.db_pool, new_event_links)
+        # We now calculate chain ID/sequence numbers for any state events we're
+        # persisting. We ignore out of band memberships as we're not in the room
+        # and won't have their auth chain (we'll fix it up later if we join the
+        # room).
+        #
+        # See: docs/auth_chain_difference_algorithm.md
 
-    @classmethod
-    def _add_chain_cover_index(
-        cls,
-        txn: LoggingTransaction,
-        db_pool: DatabasePool,
-        event_chain_id_gen: SequenceGenerator,
-        event_to_room_id: Dict[str, str],
-        event_to_types: Dict[str, Tuple[str, str]],
-        event_to_auth_chain: Dict[str, StrCollection],
-    ) -> None:
-        """Calculate and persist the chain cover index for the given events.
+        # We ignore legacy rooms that we aren't filling the chain cover index
+        # for.
+        rows = cast(
+            List[Tuple[str, Optional[Union[int, bool]]]],
+            self.db_pool.simple_select_many_txn(
+                txn,
+                table="rooms",
+                column="room_id",
+                iterable={event.room_id for event in events if event.is_state()},
+                keyvalues={},
+                retcols=("room_id", "has_auth_chain_index"),
+            ),
+        )
+        rooms_using_chain_index = {
+            room_id for room_id, has_auth_chain_index in rows if has_auth_chain_index
+        }
 
-        Args:
-            event_to_room_id: Event ID to the room ID of the event
-            event_to_types: Event ID to type and state_key of the event
-            event_to_auth_chain: Event ID to list of auth event IDs of the
-                event (events with no auth events can be excluded).
-        """
+        state_events = {
+            event.event_id: event
+            for event in events
+            if event.is_state() and event.room_id in rooms_using_chain_index
+        }
+
+        if not state_events:
+            return
+
+        # We need to know the type/state_key and auth events of the events we're
+        # calculating chain IDs for. We don't rely on having the full Event
+        # instances as we'll potentially be pulling more events from the DB and
+        # we don't need the overhead of fetching/parsing the full event JSON.
+        event_to_types = {
+            e.event_id: (e.type, e.state_key) for e in state_events.values()
+        }
+        event_to_auth_chain = {
+            e.event_id: e.auth_event_ids() for e in state_events.values()
+        }
+        event_to_room_id = {e.event_id: e.room_id for e in state_events.values()}
 
-        new_event_links = cls._calculate_chain_cover_index(
+        self._add_chain_cover_index(
             txn,
-            db_pool,
-            event_chain_id_gen,
+            self.db_pool,
+            self.store.event_chain_id_gen,
             event_to_room_id,
             event_to_types,
             event_to_auth_chain,
         )
-        cls._persist_chain_cover_index(txn, db_pool, new_event_links)
 
     @classmethod
-    def _calculate_chain_cover_index(
+    def _add_chain_cover_index(
         cls,
         txn: LoggingTransaction,
         db_pool: DatabasePool,
@@ -663,7 +582,7 @@ class PersistEventsStore:
         event_to_room_id: Dict[str, str],
         event_to_types: Dict[str, Tuple[str, str]],
         event_to_auth_chain: Dict[str, StrCollection],
-    ) -> Dict[str, NewEventChainLinks]:
+    ) -> None:
         """Calculate the chain cover index for the given events.
 
         Args:
@@ -671,10 +590,6 @@ class PersistEventsStore:
             event_to_types: Event ID to type and state_key of the event
             event_to_auth_chain: Event ID to list of auth event IDs of the
                 event (events with no auth events can be excluded).
-
-        Returns:
-            A mapping with any new auth chain links we need to add, keyed by
-            event ID.
         """
 
         # Map from event ID to chain ID/sequence number.
@@ -793,11 +708,11 @@ class PersistEventsStore:
                     room_id = event_to_room_id.get(event_id)
                     if room_id:
                         e_type, state_key = event_to_types[event_id]
-                        db_pool.simple_upsert_txn(
+                        db_pool.simple_insert_txn(
                             txn,
                             table="event_auth_chain_to_calculate",
-                            keyvalues={"event_id": event_id},
                             values={
+                                "event_id": event_id,
                                 "room_id": room_id,
                                 "type": e_type,
                                 "state_key": state_key,
@@ -809,7 +724,7 @@ class PersistEventsStore:
                     break
 
         if not events_to_calc_chain_id_for:
-            return {}
+            return
 
         # Allocate chain ID/sequence numbers to each new event.
         new_chain_tuples = cls._allocate_chain_ids(
@@ -824,10 +739,23 @@ class PersistEventsStore:
         )
         chain_map.update(new_chain_tuples)
 
-        to_return = {
-            event_id: NewEventChainLinks(chain_id, sequence_number)
-            for event_id, (chain_id, sequence_number) in new_chain_tuples.items()
-        }
+        db_pool.simple_insert_many_txn(
+            txn,
+            table="event_auth_chains",
+            keys=("event_id", "chain_id", "sequence_number"),
+            values=[
+                (event_id, c_id, seq)
+                for event_id, (c_id, seq) in new_chain_tuples.items()
+            ],
+        )
+
+        db_pool.simple_delete_many_txn(
+            txn,
+            table="event_auth_chain_to_calculate",
+            keyvalues={},
+            column="event_id",
+            values=new_chain_tuples,
+        )
 
         # Now we need to calculate any new links between chains caused by
         # the new events.
@@ -897,38 +825,10 @@ class PersistEventsStore:
                 auth_chain_id, auth_sequence_number = chain_map[auth_id]
 
                 # Step 2a, add link between the event and auth event
-                to_return[event_id].links.append((auth_chain_id, auth_sequence_number))
                 chain_links.add_link(
                     (chain_id, sequence_number), (auth_chain_id, auth_sequence_number)
                 )
 
-        return to_return
-
-    @classmethod
-    def _persist_chain_cover_index(
-        cls,
-        txn: LoggingTransaction,
-        db_pool: DatabasePool,
-        new_event_links: Dict[str, NewEventChainLinks],
-    ) -> None:
-        db_pool.simple_insert_many_txn(
-            txn,
-            table="event_auth_chains",
-            keys=("event_id", "chain_id", "sequence_number"),
-            values=[
-                (event_id, new_links.chain_id, new_links.sequence_number)
-                for event_id, new_links in new_event_links.items()
-            ],
-        )
-
-        db_pool.simple_delete_many_txn(
-            txn,
-            table="event_auth_chain_to_calculate",
-            keyvalues={},
-            column="event_id",
-            values=new_event_links,
-        )
-
         db_pool.simple_insert_many_txn(
             txn,
             table="event_auth_chain_links",
@@ -938,16 +838,7 @@ class PersistEventsStore:
                 "target_chain_id",
                 "target_sequence_number",
             ),
-            values=[
-                (
-                    new_links.chain_id,
-                    new_links.sequence_number,
-                    target_chain_id,
-                    target_sequence_number,
-                )
-                for new_links in new_event_links.values()
-                for (target_chain_id, target_sequence_number) in new_links.links
-            ],
+            values=list(chain_links.get_additions()),
         )
 
     @staticmethod
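The chain cover index restored by this revert assigns each state event a (chain ID, sequence number) pair and records links between chains derived from auth events (see docs/auth_chain_difference_algorithm.md). A toy model of the link-derivation step ("Step 2a" in the code above), with hypothetical event IDs; the real implementation additionally routes links through `chain_links.add_link` so that transitively implied links can be skipped:

```python
def derive_chain_links(event_chains, event_auth):
    """Derive cross-chain links from auth events.

    event_chains: event_id -> (chain_id, sequence_number)
    event_auth:   event_id -> iterable of auth event_ids
    Returns a set of ((chain_id, seq), (auth_chain_id, auth_seq)) pairs.
    """
    links = set()
    for event_id, origin in event_chains.items():
        for auth_id in event_auth.get(event_id, ()):
            target = event_chains.get(auth_id)
            # Links within a single chain are already implied by the
            # sequence numbers, so only record cross-chain links.
            if target is not None and target[0] != origin[0]:
                links.add((origin, target))
    return links

# Two events on chain 1, a join on chain 2 authed by both:
chains = {"$create": (1, 1), "$power": (1, 2), "$join": (2, 1)}
auth = {"$power": ["$create"], "$join": ["$create", "$power"]}
assert derive_chain_links(chains, auth) == {((2, 1), (1, 1)), ((2, 1), (1, 2))}
```

The resulting pairs correspond to rows of `event_auth_chain_links`, which is what lets the auth chain difference be computed with range queries instead of walking every auth event.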
diff --git a/synapse/storage/databases/main/room.py b/synapse/storage/databases/main/room.py
index b8a71c803e..d5627b1d6e 100644
--- a/synapse/storage/databases/main/room.py
+++ b/synapse/storage/databases/main/room.py
@@ -606,6 +606,8 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
         order_by: str,
         reverse_order: bool,
         search_term: Optional[str],
+        public_rooms: Optional[bool],
+        empty_rooms: Optional[bool],
     ) -> Tuple[List[Dict[str, Any]], int]:
         """Function to retrieve a paginated list of rooms as json.
 
@@ -617,30 +619,49 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
             search_term: a string to filter room names,
                 canonical alias and room ids by.
                 Room ID must match exactly. Canonical alias must match a substring of the local part.
+            public_rooms: Optional flag to filter public and non-public rooms.
+                    If true, only public rooms are returned; if false, public rooms
+                    are excluded from the query. When it is None (the default),
+                    both public and non-public rooms are queried.
+            empty_rooms: Optional flag to filter empty and non-empty rooms.
+                    A room is empty if its joined_members count is zero.
+                    If true, only empty rooms are returned; if false, empty rooms
+                    are excluded from the query. When it is None (the default),
+                    both empty and non-empty rooms are queried.
         Returns:
             A list of room dicts and an integer representing the total number of
             rooms that exist given this query
         """
         # Filter room names by a string
-        where_statement = ""
-        search_pattern: List[object] = []
+        filter_: List[str] = []
+        where_args: List[object] = []
         if search_term:
-            where_statement = """
-                WHERE LOWER(state.name) LIKE ?
-                OR LOWER(state.canonical_alias) LIKE ?
-                OR state.room_id = ?
-            """
+            filter_ = [
+                "(LOWER(state.name) LIKE ? OR "
+                "LOWER(state.canonical_alias) LIKE ? OR "
+                "state.room_id = ?)"
+            ]
 
             # Our postgres db driver converts ? -> %s in SQL strings as that's the
             # placeholder for postgres.
             # HOWEVER, if you put a % into your SQL then everything goes wibbly.
             # To get around this, we're going to surround search_term with %'s
             # before giving it to the database in python instead
-            search_pattern = [
-                "%" + search_term.lower() + "%",
-                "#%" + search_term.lower() + "%:%",
+            where_args = [
+                f"%{search_term.lower()}%",
+                f"#%{search_term.lower()}%:%",
                 search_term,
             ]
+        if public_rooms is not None:
+            filter_arg = "1" if public_rooms else "0"
+            filter_.append(f"rooms.is_public = '{filter_arg}'")
+
+        if empty_rooms is not None:
+            if empty_rooms:
+                filter_.append("curr.joined_members = 0")
+            else:
+                filter_.append("curr.joined_members <> 0")
+
+        where_clause = "WHERE " + " AND ".join(filter_) if len(filter_) > 0 else ""
 
         # Set ordering
         if RoomSortOrder(order_by) == RoomSortOrder.SIZE:
@@ -717,7 +738,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
             LIMIT ?
             OFFSET ?
         """.format(
-            where=where_statement,
+            where=where_clause,
             order_by=order_by_column,
             direction="ASC" if order_by_asc else "DESC",
         )
@@ -726,10 +747,12 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
         count_sql = """
             SELECT count(*) FROM (
               SELECT room_id FROM room_stats_state state
+              INNER JOIN room_stats_current curr USING (room_id)
+              INNER JOIN rooms USING (room_id)
               {where}
             ) AS get_room_ids
         """.format(
-            where=where_statement,
+            where=where_clause,
         )
 
         def _get_rooms_paginate_txn(
@@ -737,7 +760,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
         ) -> Tuple[List[Dict[str, Any]], int]:
             # Add the search term into the WHERE clause
             # and execute the data query
-            txn.execute(info_sql, search_pattern + [limit, start])
+            txn.execute(info_sql, where_args + [limit, start])
 
             # Refactor room query data into a structured dictionary
             rooms = []
@@ -767,7 +790,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
             # Execute the count query
 
             # Add the search term into the WHERE clause if present
-            txn.execute(count_sql, search_pattern)
+            txn.execute(count_sql, where_args)
 
             room_count = cast(Tuple[int], txn.fetchone())
             return rooms, room_count[0]
diff --git a/tests/rest/admin/test_room.py b/tests/rest/admin/test_room.py
index 7562747260..95ed736451 100644
--- a/tests/rest/admin/test_room.py
+++ b/tests/rest/admin/test_room.py
@@ -1795,6 +1795,83 @@ class RoomTestCase(unittest.HomeserverTestCase):
         self.assertEqual(room_id, channel.json_body["rooms"][0].get("room_id"))
         self.assertEqual("ж", channel.json_body["rooms"][0].get("name"))
 
+    def test_filter_public_rooms(self) -> None:
+        self.helper.create_room_as(
+            self.admin_user, tok=self.admin_user_tok, is_public=True
+        )
+        self.helper.create_room_as(
+            self.admin_user, tok=self.admin_user_tok, is_public=True
+        )
+        self.helper.create_room_as(
+            self.admin_user, tok=self.admin_user_tok, is_public=False
+        )
+
+        response = self.make_request(
+            "GET",
+            "/_synapse/admin/v1/rooms",
+            access_token=self.admin_user_tok,
+        )
+        self.assertEqual(200, response.code, msg=response.json_body)
+        self.assertEqual(3, response.json_body["total_rooms"])
+        self.assertEqual(3, len(response.json_body["rooms"]))
+
+        response = self.make_request(
+            "GET",
+            "/_synapse/admin/v1/rooms?public_rooms=true",
+            access_token=self.admin_user_tok,
+        )
+        self.assertEqual(200, response.code, msg=response.json_body)
+        self.assertEqual(2, response.json_body["total_rooms"])
+        self.assertEqual(2, len(response.json_body["rooms"]))
+
+        response = self.make_request(
+            "GET",
+            "/_synapse/admin/v1/rooms?public_rooms=false",
+            access_token=self.admin_user_tok,
+        )
+        self.assertEqual(200, response.code, msg=response.json_body)
+        self.assertEqual(1, response.json_body["total_rooms"])
+        self.assertEqual(1, len(response.json_body["rooms"]))
+
+    def test_filter_empty_rooms(self) -> None:
+        self.helper.create_room_as(
+            self.admin_user, tok=self.admin_user_tok, is_public=True
+        )
+        self.helper.create_room_as(
+            self.admin_user, tok=self.admin_user_tok, is_public=True
+        )
+        room_id = self.helper.create_room_as(
+            self.admin_user, tok=self.admin_user_tok, is_public=False
+        )
+        self.helper.leave(room_id, self.admin_user, tok=self.admin_user_tok)
+
+        response = self.make_request(
+            "GET",
+            "/_synapse/admin/v1/rooms",
+            access_token=self.admin_user_tok,
+        )
+        self.assertEqual(200, response.code, msg=response.json_body)
+        self.assertEqual(3, response.json_body["total_rooms"])
+        self.assertEqual(3, len(response.json_body["rooms"]))
+
+        response = self.make_request(
+            "GET",
+            "/_synapse/admin/v1/rooms?empty_rooms=false",
+            access_token=self.admin_user_tok,
+        )
+        self.assertEqual(200, response.code, msg=response.json_body)
+        self.assertEqual(2, response.json_body["total_rooms"])
+        self.assertEqual(2, len(response.json_body["rooms"]))
+
+        response = self.make_request(
+            "GET",
+            "/_synapse/admin/v1/rooms?empty_rooms=true",
+            access_token=self.admin_user_tok,
+        )
+        self.assertEqual(200, response.code, msg=response.json_body)
+        self.assertEqual(1, response.json_body["total_rooms"])
+        self.assertEqual(1, len(response.json_body["rooms"]))
+
     def test_single_room(self) -> None:
         """Test that a single room can be requested correctly"""
         # Create two test rooms
diff --git a/tests/rest/client/test_sync.py b/tests/rest/client/test_sync.py
index 2b06767b8a..5195659ec2 100644
--- a/tests/rest/client/test_sync.py
+++ b/tests/rest/client/test_sync.py
@@ -1228,7 +1228,9 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
 
     def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
         self.store = hs.get_datastores().main
-        self.sync_endpoint = "/_matrix/client/unstable/org.matrix.msc3575/sync"
+        self.sync_endpoint = (
+            "/_matrix/client/unstable/org.matrix.simplified_msc3575/sync"
+        )
         self.store = hs.get_datastores().main
         self.event_sources = hs.get_event_sources()
 
diff --git a/tests/storage/test_event_chain.py b/tests/storage/test_event_chain.py
index c4e216c308..81feb3ec29 100644
--- a/tests/storage/test_event_chain.py
+++ b/tests/storage/test_event_chain.py
@@ -447,14 +447,7 @@ class EventChainStoreTestCase(HomeserverTestCase):
             )
 
             # Actually call the function that calculates the auth chain stuff.
-            new_event_links = (
-                persist_events_store.calculate_chain_cover_index_for_events_txn(
-                    txn, events[0].room_id, [e for e in events if e.is_state()]
-                )
-            )
-            persist_events_store._persist_event_auth_chain_txn(
-                txn, events, new_event_links
-            )
+            persist_events_store._persist_event_auth_chain_txn(txn, events)
 
         self.get_success(
             persist_events_store.db_pool.runInteraction(
diff --git a/tests/storage/test_event_federation.py b/tests/storage/test_event_federation.py
index 1832a23714..0a6253e22c 100644
--- a/tests/storage/test_event_federation.py
+++ b/tests/storage/test_event_federation.py
@@ -365,19 +365,12 @@ class EventFederationWorkerStoreTestCase(tests.unittest.HomeserverTestCase):
                     },
                 )
 
-            events = [
-                cast(EventBase, FakeEvent(event_id, room_id, AUTH_GRAPH[event_id]))
-                for event_id in AUTH_GRAPH
-            ]
-            new_event_links = (
-                self.persist_events.calculate_chain_cover_index_for_events_txn(
-                    txn, room_id, [e for e in events if e.is_state()]
-                )
-            )
             self.persist_events._persist_event_auth_chain_txn(
                 txn,
-                events,
-                new_event_links,
+                [
+                    cast(EventBase, FakeEvent(event_id, room_id, AUTH_GRAPH[event_id]))
+                    for event_id in AUTH_GRAPH
+                ],
             )
 
         self.get_success(
@@ -635,20 +628,13 @@ class EventFederationWorkerStoreTestCase(tests.unittest.HomeserverTestCase):
                 )
 
             # Insert all events apart from 'B'
-            events = [
-                cast(EventBase, FakeEvent(event_id, room_id, auth_graph[event_id]))
-                for event_id in auth_graph
-                if event_id != "b"
-            ]
-            new_event_links = (
-                self.persist_events.calculate_chain_cover_index_for_events_txn(
-                    txn, room_id, [e for e in events if e.is_state()]
-                )
-            )
             self.persist_events._persist_event_auth_chain_txn(
                 txn,
-                events,
-                new_event_links,
+                [
+                    cast(EventBase, FakeEvent(event_id, room_id, auth_graph[event_id]))
+                    for event_id in auth_graph
+                    if event_id != "b"
+                ],
             )
 
             # Now we insert the event 'B' without a chain cover, by temporarily
@@ -661,14 +647,9 @@ class EventFederationWorkerStoreTestCase(tests.unittest.HomeserverTestCase):
                 updatevalues={"has_auth_chain_index": False},
             )
 
-            events = [cast(EventBase, FakeEvent("b", room_id, auth_graph["b"]))]
-            new_event_links = (
-                self.persist_events.calculate_chain_cover_index_for_events_txn(
-                    txn, room_id, [e for e in events if e.is_state()]
-                )
-            )
             self.persist_events._persist_event_auth_chain_txn(
-                txn, events, new_event_links
+                txn,
+                [cast(EventBase, FakeEvent("b", room_id, auth_graph["b"]))],
             )
 
             self.store.db_pool.simple_update_txn(