| Commit message | Author | Age | Files | Lines |
| |
| |
When we get a partial_state response from send_join, store information in the
database about it:
* store a record about the room as a whole having partial state, and stash the
list of member servers too.
* flag the join event itself as having partial state
* also, for any new events whose prev-events are partial-stated, note that
they will *also* be partial-stated.
We don't yet make any attempt to interpret this data, so API calls (and a bunch
of other things) are just going to get incorrect data.
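For illustration only, the schema this implies might look roughly like the
following (table and column names here are assumptions, not taken from the
commit):
```sql
-- A room whose state is only partially known, plus the servers in it
-- (so the rest of the state can be fetched later).
CREATE TABLE partial_state_rooms (
    room_id TEXT PRIMARY KEY
);
CREATE TABLE partial_state_rooms_servers (
    room_id TEXT NOT NULL,
    server_name TEXT NOT NULL
);
-- Events (the join itself, and any new event built on partial-stated
-- prev events) whose state is not yet fully known.
CREATE TABLE partial_state_events (
    room_id TEXT NOT NULL,
    event_id TEXT NOT NULL
);
```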
|
| |
Don't attempt to add non-string `value`s to `event_search` and add a
background update to clear out bad rows from `event_search` when
using sqlite.
Signed-off-by: Sean Quah <seanq@element.io>
|
|
|
|
| |
use. The slowness has existed since the initial implementation of refresh tokens. (#12056)
|
|
|
|
| |
users. (#11655)
|
|
|
| |
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
|
|
|
| |
... and start populating them for new events
|
|
|
| |
This is a follow-up to #10565.
|
|
|
|
| |
Preparation for dropping this table altogether. Part of #6574.
|
|
|
| |
this field is never read, so we may as well stop populating it.
|
|
|
| |
As a step towards allowing back-channel logout for OIDC.
|
|
|
|
|
| |
We're going to add a `state_key` column to the `events` table, so we need to
add some disambiguation to queries which use it.
|
|
|
|
| |
refresh tokens are in use. (#11425)
|
|
|
|
|
| |
(#11421)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
|
|
|
|
| |
Instead of only known relation types. This also reworks the background
update for thread relations to crawl events and search for any relation
type, not just threaded relations.
|
| | |
Synapse 1.47.0rc3 (2021-11-16)
==============================
Bugfixes
--------
- Fix a bug introduced in 1.47.0rc1 which caused worker processes to not halt startup in the presence of outstanding database migrations. ([\#11346](https://github.com/matrix-org/synapse/issues/11346))
- Fix a bug introduced in 1.47.0rc1 which prevented the 'remove deleted devices from `device_inbox` column' background process from running when updating from a recent Synapse version. ([\#11303](https://github.com/matrix-org/synapse/issues/11303), [\#11353](https://github.com/matrix-org/synapse/issues/11353))
|
| |
| |
| |
| |
| | |
(#11353)
Co-authored-by: reivilibre <oliverw@matrix.org>
|
| | |
(#11280)
* remove unused tables room_stats_historical and user_stats_historical
* update changelog number
* Bump schema compat version comment
* make linter happy
* Update comment to give more info
Co-authored-by: reivilibre <oliverw@matrix.org>
|
| |
This should speed up startup times and generally increase performance of
groups.
|
| |
|
|
|
|
| |
#10969 was merged after 1.46.0rc1 was cut and will be included
in v1.47.0rc1 instead.
|
|
|
| |
Fixes: #9346
|
| |
| |
(MSC2716) (#10975)
Resolve and share `state_groups` for all historical events in batch. This also helps for showing the appropriate avatar/displayname in Element and will work whenever `/messages` has one of the historical messages as the first message in the batch.
This does have a flaw: if you just insert a single historical event somewhere, it probably won't resolve the state correctly from `/messages` or `/context`, since those will grab a non-historical event above or below it whose resolved state never included the historical state. For the same reasons, this also does not work in Element across the transition from actual messages to historical messages. In the Gitter case, this isn't really a problem since all of the historical messages are in one big lump at the beginning of the room.
For a future iteration, might be good to look at `/messages` and `/context` to additionally add the `state` for any historical messages in that batch.
---
How are the `state_groups` shared? To illustrate the `state_group` sharing, see this example:
**Before** (new `state_group` for every event 😬, very inefficient):
```
# Tests from https://github.com/matrix-org/complement/pull/206
$ COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory/parallel/should_resolve_member_state_events_for_historical_events
create_new_client_event m.room.member event=$_JXfwUDIWS6xKGG4SmZXjSFrizhARM7QblhATVWWUcA state_group=None
create_new_client_event org.matrix.msc2716.insertion event=$1ZBfmBKEjg94d-vGYymKrVYeghwBOuGJ3wubU1-I9y0 state_group=9
create_new_client_event org.matrix.msc2716.insertion event=$Mq2JvRetTyclPuozRI682SAjYp3GqRuPc8_cH5-ezPY state_group=10
create_new_client_event m.room.message event=$MfmY4rBQkxrIp8jVwVMTJ4PKnxSigpG9E2cn7S0AtTo state_group=11
create_new_client_event m.room.message event=$uYOv6V8wiF7xHwOMt-60d1AoOIbqLgrDLz6ZIQDdWUI state_group=12
create_new_client_event m.room.message event=$PAbkJRMxb0bX4A6av463faiAhxkE3FEObM1xB4D0UG4 state_group=13
create_new_client_event org.matrix.msc2716.batch event=$Oy_S7AWN7rJQe_MYwGPEy6RtbYklrI-tAhmfiLrCaKI state_group=14
```
**After** (all events in batch sharing `state_group=10`) (the base insertion event has `state_group=8` which matches the `prev_event` we're inserting next to):
```
# Tests from https://github.com/matrix-org/complement/pull/206
$ COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory/parallel/should_resolve_member_state_events_for_historical_events
create_new_client_event m.room.member event=$PWomJ8PwENYEYuVNoG30gqtybuQQSZ55eldBUSs0i0U state_group=None
create_new_client_event org.matrix.msc2716.insertion event=$e_mCU7Eah9ABF6nQU7lu4E1RxIWccNF05AKaTT5m3lw state_group=9
create_new_client_event org.matrix.msc2716.insertion event=$ui7A3_GdXIcJq0C8GpyrF8X7B3DTjMd_WGCjogax7xU state_group=10
create_new_client_event m.room.message event=$EnTIM5rEGVezQJiYl62uFBl6kJ7B-sMxWqe2D_4FX1I state_group=10
create_new_client_event m.room.message event=$LGx5jGONnBPuNhAuZqHeEoXChd9ryVkuTZatGisOPjk state_group=10
create_new_client_event m.room.message event=$wW0zwoN50lbLu1KoKbybVMxLbKUj7GV_olozIc5i3M0 state_group=10
create_new_client_event org.matrix.msc2716.batch event=$5ZB6dtzqFBCEuMRgpkU201Qhx3WtXZGTz_YgldL6JrQ state_group=10
```
|
|
|
|
|
| |
Before Synapse 1.31 (#9411), we relied on `outlier` being stored in the
`internal_metadata` column. We can now assume nobody will roll back their
deployment that far, so we can drop the legacy support.
|
|
|
|
|
| |
As pointed out by @richvdh, https://github.com/matrix-org/synapse/pull/10838#discussion_r715424244
Retroactively summarize `61` - `64`
|
| |
This avoids the overhead of searching through the various
configuration classes by directly referencing the class that
the attributes are in.
It also improves type hints since mypy can now resolve the
types of the configuration variables.
|
| |
endpoint (#10838)
See https://github.com/matrix-org/matrix-doc/pull/2716#discussion_r684574497
Dropping support for older MSC2716 room versions so we don't have to worry about
supporting both chunk and batch events.
|
|
|
|
| |
Instead of proxying through the magic getter of the RootConfig
object. This should be more performant (and is more explicit).
|
|
|
| |
Signed-off-by: Sean Quah <seanq@element.io>
|
|
|
|
|
|
| |
Part of https://github.com/matrix-org/synapse/pull/10566
- Fill in creator whenever we insert into the rooms table
- Add background update to backfill any missing creator values
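As a rough illustration of the backfill only (the real background update runs
in batches from Python; the JSON extraction below assumes sqlite's JSON1
functions and is not the actual delta):
```sql
-- Fill in rooms.creator from each room's m.room.create event where missing.
UPDATE rooms
SET creator = (
    SELECT json_extract(ej.json, '$.content.creator')
    FROM current_state_events AS cse
    JOIN event_json AS ej USING (event_id)
    WHERE cse.room_id = rooms.room_id
      AND cse.type = 'm.room.create'
      AND cse.state_key = ''
)
WHERE creator IS NULL;
```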
|
|
|
| |
This was erroneously put under schema version 62 instead of 63.
|
|
|
|
|
| |
When a user deletes an email address from their account, we will
now also remove all pushers for that email and that user
(even if those pushers were created by a different client).
|
|
|
| |
I found this easy to miss (and evidently it was missed for schema version 62).
|
| |
|
|
|
|
|
| |
Signed-off-by: Callum Brown <callum@calcuode.com>
This is part of my GSoC project implementing [MSC3231](https://github.com/matrix-org/matrix-doc/pull/3231).
|
|
|
| |
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
| |
| |
* Make historical messages available to federated servers
Part of MSC2716: https://github.com/matrix-org/matrix-doc/pull/2716
Follow-up to https://github.com/matrix-org/synapse/pull/9247
* Debug message not available on federation
* Add base starting insertion point when no chunk ID is provided
* Fix messages from multiple senders in historical chunk
Follow-up to https://github.com/matrix-org/synapse/pull/9247
Part of MSC2716: https://github.com/matrix-org/matrix-doc/pull/2716
---
Previously, Synapse would throw a 403,
`Cannot force another user to join.`,
because we were trying to use `?user_id` from a single virtual user
which did not match with messages from other users in the chunk.
* Remove debug lines
* Messing with selecting insertion event extremities
* Move db schema change to new version
* Add more and better comments
* Make a fake requester with just what we need
See https://github.com/matrix-org/synapse/pull/10276#discussion_r660999080
* Store insertion events in table
* Make base insertion event float off on its own
See https://github.com/matrix-org/synapse/pull/10250#issuecomment-875711889
Conflicts:
synapse/rest/client/v1/room.py
* Validate that the app service can actually control the given user
See https://github.com/matrix-org/synapse/pull/10276#issuecomment-876316455
Conflicts:
synapse/rest/client/v1/room.py
* Add some better comments on what we're trying to check for
* Continue debugging
* Share validation logic
* Add inserted historical messages to /backfill response
* Remove debug sql queries
* Some marker event implementation trials
* Clean up PR
* Rename insertion_event_id to just event_id
* Add some better sql comments
* More accurate description
* Add changelog
* Make it clear what MSC the change is part of
* Add more detail on which insertion event came through
* Address review and improve sql queries
* Only use event_id as unique constraint
* Fix test case where insertion event is already in the normal DAG
* Remove debug changes
* Add support for MSC2716 marker events
* Process markers when we receive it over federation
* WIP: make hs2 backfill historical messages after marker event
* hs2 to better ask for insertion event extremity
But running into the `sqlite3.IntegrityError: NOT NULL constraint failed: event_to_state_groups.state_group`
error
* Add insertion_event_extremities table
* Switch to chunk events so we can auth via power_levels
Previously, we were using `content.chunk_id` to connect one
chunk to another. But these events can be from any `sender`
and we can't tell who should be able to send historical events.
We know we only want the application service to do it but these
events have the sender of a real historical message, not the
application service user ID as the sender. Other federated homeservers
also have no indicator which senders are an application service on
the originating homeserver.
So we want to auth all of the MSC2716 events via power_levels
and have them be sent by the application service with proper
PL levels in the room.
* Switch to chunk events for federation
* Add unstable room version to support new historical PL
* Messy: Fix undefined state_group for federated historical events
```
2021-07-13 02:27:57,810 - synapse.handlers.federation - 1248 - ERROR - GET-4 - Failed to backfill from hs1 because NOT NULL constraint failed: event_to_state_groups.state_group
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/federation.py", line 1216, in try_backfill
await self.backfill(
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/federation.py", line 1035, in backfill
await self._auth_and_persist_event(dest, event, context, backfilled=True)
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/federation.py", line 2222, in _auth_and_persist_event
await self._run_push_actions_and_persist_event(event, context, backfilled)
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/federation.py", line 2244, in _run_push_actions_and_persist_event
await self.persist_events_and_notify(
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/federation.py", line 3290, in persist_events_and_notify
events, max_stream_token = await self.storage.persistence.persist_events(
File "/usr/local/lib/python3.8/site-packages/synapse/logging/opentracing.py", line 774, in _trace_inner
return await func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/persist_events.py", line 320, in persist_events
ret_vals = await yieldable_gather_results(enqueue, partitioned.items())
File "/usr/local/lib/python3.8/site-packages/synapse/storage/persist_events.py", line 237, in handle_queue_loop
ret = await self._per_item_callback(
File "/usr/local/lib/python3.8/site-packages/synapse/storage/persist_events.py", line 577, in _persist_event_batch
await self.persist_events_store._persist_events_and_state_updates(
File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/main/events.py", line 176, in _persist_events_and_state_updates
await self.db_pool.runInteraction(
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 681, in runInteraction
result = await self.runWithConnection(
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 770, in runWithConnection
return await make_deferred_yieldable(
File "/usr/local/lib/python3.8/site-packages/twisted/python/threadpool.py", line 238, in inContext
result = inContext.theWork() # type: ignore[attr-defined]
File "/usr/local/lib/python3.8/site-packages/twisted/python/threadpool.py", line 254, in <lambda>
inContext.theWork = lambda: context.call( # type: ignore[attr-defined]
File "/usr/local/lib/python3.8/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/local/lib/python3.8/site-packages/twisted/python/context.py", line 83, in callWithContext
return func(*args, **kw)
File "/usr/local/lib/python3.8/site-packages/twisted/enterprise/adbapi.py", line 293, in _runWithConnection
compat.reraise(excValue, excTraceback)
File "/usr/local/lib/python3.8/site-packages/twisted/python/deprecate.py", line 298, in deprecatedFunction
return function(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/twisted/python/compat.py", line 403, in reraise
raise exception.with_traceback(traceback)
File "/usr/local/lib/python3.8/site-packages/twisted/enterprise/adbapi.py", line 284, in _runWithConnection
result = func(conn, *args, **kw)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 765, in inner_func
return func(db_conn, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 549, in new_transaction
r = func(cursor, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/synapse/logging/utils.py", line 69, in wrapped
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/main/events.py", line 385, in _persist_events_txn
self._store_event_state_mappings_txn(txn, events_and_contexts)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/main/events.py", line 2065, in _store_event_state_mappings_txn
self.db_pool.simple_insert_many_txn(
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 923, in simple_insert_many_txn
txn.execute_batch(sql, vals)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 280, in execute_batch
self.executemany(sql, args)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 300, in executemany
self._do_execute(self.txn.executemany, sql, *args)
File "/usr/local/lib/python3.8/site-packages/synapse/storage/database.py", line 330, in _do_execute
return func(sql, *args)
sqlite3.IntegrityError: NOT NULL constraint failed: event_to_state_groups.state_group
```
* Revert "Messy: Fix undefined state_group for federated historical events"
This reverts commit 187ab28611546321e02770944c86f30ee2bc742a.
* Fix federated events being rejected for no state_groups
Add fix from https://github.com/matrix-org/synapse/pull/10439
until it merges.
* Adapting to experimental room version
* Some log cleanup
* Add better comments around extremity fetching code and why
* Rename to be more accurate to what the function returns
* Add changelog
* Ignore rejected events
* Use simplified upsert
* Add Erik's explanation of extra event checks
See https://github.com/matrix-org/synapse/pull/10498#discussion_r680880332
* Clarify that the depth is not directly correlated to the backwards extremity that we return
See https://github.com/matrix-org/synapse/pull/10498#discussion_r681725404
* lock only matters for sqlite
See https://github.com/matrix-org/synapse/pull/10498#discussion_r681728061
* Move new SQL changes to its own delta file
* Clean up upsert docstring
* Bump database schema version (62)
|
| |
scrollback history (MSC2716) (#10245)
* Make historical messages available to federated servers
Part of MSC2716: https://github.com/matrix-org/matrix-doc/pull/2716
Follow-up to https://github.com/matrix-org/synapse/pull/9247
* Debug message not available on federation
* Add base starting insertion point when no chunk ID is provided
* Fix messages from multiple senders in historical chunk
Follow-up to https://github.com/matrix-org/synapse/pull/9247
Part of MSC2716: https://github.com/matrix-org/matrix-doc/pull/2716
---
Previously, Synapse would throw a 403,
`Cannot force another user to join.`,
because we were trying to use `?user_id` from a single virtual user
which did not match with messages from other users in the chunk.
* Remove debug lines
* Messing with selecting insertion event extremities
* Move db schema change to new version
* Add more and better comments
* Make a fake requester with just what we need
See https://github.com/matrix-org/synapse/pull/10276#discussion_r660999080
* Store insertion events in table
* Make base insertion event float off on its own
See https://github.com/matrix-org/synapse/pull/10250#issuecomment-875711889
Conflicts:
synapse/rest/client/v1/room.py
* Validate that the app service can actually control the given user
See https://github.com/matrix-org/synapse/pull/10276#issuecomment-876316455
Conflicts:
synapse/rest/client/v1/room.py
* Add some better comments on what we're trying to check for
* Continue debugging
* Share validation logic
* Add inserted historical messages to /backfill response
* Remove debug sql queries
* Some marker event implementation trials
* Clean up PR
* Rename insertion_event_id to just event_id
* Add some better sql comments
* More accurate description
* Add changelog
* Make it clear what MSC the change is part of
* Add more detail on which insertion event came through
* Address review and improve sql queries
* Only use event_id as unique constraint
* Fix test case where insertion event is already in the normal DAG
* Remove debug changes
* Switch to chunk events so we can auth via power_levels
Previously, we were using `content.chunk_id` to connect one
chunk to another. But these events can be from any `sender`
and we can't tell who should be able to send historical events.
We know we only want the application service to do it but these
events have the sender of a real historical message, not the
application service user ID as the sender. Other federated homeservers
also have no indicator which senders are an application service on
the originating homeserver.
So we want to auth all of the MSC2716 events via power_levels
and have them be sent by the application service with proper
PL levels in the room.
* Switch to chunk events for federation
* Add unstable room version to support new historical PL
* Fix federated events being rejected for no state_groups
Add fix from https://github.com/matrix-org/synapse/pull/10439
until it merges.
* Only connect base insertion event to prev_event_ids
Per discussion with @erikjohnston,
https://matrix.to/#/!UytJQHLQYfvYWsGrGY:jki.re/$12bTUiObDFdHLAYtT7E-BvYRp3k_xv8w0dUQHibasJk?via=jki.re&via=matrix.org
* Make it possible to get the room_version with txn
* Allow but ignore historical events in unsupported room version
See https://github.com/matrix-org/synapse/pull/10245#discussion_r675592489
We can't reject historical events on unsupported room versions because homeservers without knowledge of MSC2716 or the new room version don't reject historical events either.
Since we can't rely on the auth check here to stop historical events on unsupported room versions, I've added some additional checks in the processing/persisting code (`synapse/storage/databases/main/events.py` -> `_handle_insertion_event` and `_handle_chunk_event`). I've had to do some refactoring so there is a method to fetch the room version by `txn`.
* Move to unique index syntax
See https://github.com/matrix-org/synapse/pull/10245#discussion_r675638509
* High-level document how the insertion->chunk lookup works
* Remove create_event fallback for room_versions
See https://github.com/matrix-org/synapse/pull/10245/files#r677641879
* Use updated method name
|
|
| |
The postgres statistics collector sometimes massively underestimates the
number of distinct state groups in `state_groups_state`, which can cause
postgres to use table scans for queries over multiple state groups.
We fix this by manually setting `n_distinct` on the column.
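The fix amounts to a one-line delta along these lines (the exact value used
here is an assumption):
```sql
-- Tell the postgres planner that state_group has many distinct values;
-- a negative n_distinct is interpreted as a fraction of the row count.
ALTER TABLE state_groups_state ALTER COLUMN state_group SET (n_distinct = -0.02);
```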
|
|
|
|
|
| |
while I'm dealing with INTEGERs and BIGINTs, let's replace room_depth.min_depth
with a BIGINT.
|
| |
|
|
|
| |
Fixes #9602
|
|
|
|
|
| |
this was a typo introduced in #10282. We don't want to end up doing the
`replace_stream_ordering_column` update after anything that comes up in
migration 60/03.
|
| |
|
|
|
|
| |
We need to rebuild *all* of the indexes that use the current `stream_ordering`
column.
|
| | |
Fixes #9490
This will break a couple of SyTests that expect failures to be added to the response of a federation /send, which obviously doesn't happen now that things are asynchronous.
Two drawbacks:
Currently there is no logic to handle any events left in the staging area after restart, and so they'll only be handled on the next incoming event in that room. That can be fixed separately.
We now only process one event per room at a time. This can be fixed up further down the line.
|
| |
| |
| | |
This adds a simple best-effort locking mechanism that works across workers.
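A best-effort lock of this kind can be backed by a single table keyed on the
lock name, roughly as below (table and column names are assumptions, not the
actual schema):
```sql
CREATE TABLE worker_locks (
    lock_name TEXT NOT NULL,
    lock_key TEXT NOT NULL,
    -- which worker currently holds the lock, plus a token for this acquisition
    instance_name TEXT NOT NULL,
    token TEXT NOT NULL,
    -- the lock counts as dropped if it has not been renewed recently
    last_renewed_ts BIGINT NOT NULL
);
CREATE UNIQUE INDEX worker_locks_key ON worker_locks (lock_name, lock_key);
```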
|
| | |
* Move background update names out to a separate class
`EventsBackgroundUpdatesStore` gets inherited and we don't really want to
further pollute the namespace.
* Migrate stream_ordering to a bigint
* changelog
|
| |
This implements refresh tokens, as defined by MSC2918.
This MSC has been implemented client-side in Hydrogen Web: vector-im/hydrogen-web#235
The basics of the MSC work: requesting refresh tokens on login, having the access tokens expire, and using the refresh token to get a new one.
Signed-off-by: Quentin Gliech <quentingliech@gmail.com>
|
|
|
| |
Introduced in #6739
|
|
|
| |
This is essentially an implementation of the proposal made at https://hackmd.io/@richvdh/BJYXQMQHO, though the details have ended up looking slightly different.
|
|
|
|
|
|
| |
This PR aims to implement the knock feature as proposed in https://github.com/matrix-org/matrix-doc/pull/2403
Signed-off-by: Sorunome mail@sorunome.de
Signed-off-by: Andrew Morgan andrewm@element.io
|
|
|
|
| |
to them, instead of something in-memory (#9823)
|
|
|
|
|
| |
The hope here is that by moving all the schema files into synapse/storage/schema, it gets a bit easier for newcomers to navigate.
It certainly got easier for me to write a helpful README. There's more to do on that front, but I'll follow up with other PRs for that.
|
| |
|
|
|
|
| |
They were put in the global schema delta directory due to a bad merge.
|
|\
| |
| |
| | |
erikj/refactor_stores
|
| |\ |
|
| |\ \ |
|
| |\ \ \ |
|
| | |\ \ \ |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* allow devices to be marked as "hidden"
This is a prerequisite for cross-signing, as it allows us to create other things
that live within the device namespace, so they can be used for signatures.
|
| | | | | | |
|
| | | | | | |
|
| |\ \ \ \ \ |
|
| | |\| | | | |
|
| |\| | | | | |
|
| | | | | | | |
|
| |/ / / / / |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
This is a prerequisite for cross-signing, as it allows us to create other things
that live within the device namespace, so they can be used for signatures.
|
| |_|_|_|/
|/| | | |
| | | | |
| | | | |
| | | | | |
This is in preparation for having multiple data stores that offer
different functionality, e.g. splitting out state or event storage.
|
|\ \ \ \ \
| | | | | |
| | | | | |
| | | | | | |
erikj/disable_sql_bytes
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
you can't plausibly ALTER TABLE in sqlite, so we create the new table with the
right schema to start with.
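The standard sqlite workaround looks roughly like this (illustrative table and
column names, not the ones from this commit):
```sql
-- sqlite cannot change a column's type in place, so rebuild the table.
CREATE TABLE new_table (id TEXT NOT NULL, counter BIGINT NOT NULL);
INSERT INTO new_table (id, counter) SELECT id, counter FROM old_table;
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;
```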
|
| |\ \ \ \ \
| | | | | | |
| | | | | | | |
Fix inserting bytes as text in `censor_redactions`
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
Use room_stats and room_state for room directory search
|
| |\ \ \ \ \ \ |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
These tables have been unused since #5893 (as amended by #6047), so we can now drop
them.
Fixes #6048.
|
| |_|/ / / / /
|/| | | | | | |
|
| |/ / / / /
|/| | | | | |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
We have set the max retry interval to a value larger than a postgres or
sqlite int can hold, which caused exceptions when updating the
destinations table.
To fix postgres we need to change the column to a bigint, and for sqlite
we lower the max interval to 2**62 (which is still incredibly long).
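On postgres that is a straightforward column widening, roughly (column name
assumed):
```sql
-- Postgres only: widen the column so larger retry intervals fit.
ALTER TABLE destinations ALTER COLUMN retry_interval TYPE BIGINT;
```
On sqlite, lowering the cap avoids having to touch the column at all.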
|
|/ / / / /
| | | | |
| | | | |
| | | | |
| | | | | |
This will allow us to efficiently search for uncensored redactions in
the DB before a given time.
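A sketch of the kind of delta this implies (column and index names are
assumptions):
```sql
-- Record when each redaction arrived, and index it so that "redactions
-- received before time X" becomes an index scan rather than a table scan.
ALTER TABLE redactions ADD COLUMN received_ts BIGINT;
CREATE INDEX redactions_received_ts ON redactions (received_ts);
```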
|
|\ \ \ \ \
| | | | | |
| | | | | |
| | | | | | |
erikj/cleanup_user_ips
|
| |\ \ \ \ \ |
|
| |\ \ \ \ \ \ |
|
| | |_|_|/ / /
| |/| | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
We want to assign unique mxids to SAML users based on an incrementing
suffix. For that to work, we need to record the allocated mxid in a separate
table.
|
| | | | | | | |
|
| |_|/ / / /
|/| | | | |
| | | | | |
| | | | | |
| | | | | | |
This allows us to purge old user_ips entries without having to preserve
the latest last seen info for active devices.
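One way to achieve that is to denormalise the latest last-seen info onto the
device row itself, roughly (column names assumed):
```sql
ALTER TABLE devices ADD COLUMN last_seen BIGINT;
ALTER TABLE devices ADD COLUMN ip TEXT;
ALTER TABLE devices ADD COLUMN user_agent TEXT;
```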
|
| |/ / / /
|/| | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
This is a partial revert of #5893. The problem is that if we drop these tables
in the same release as removing the code that writes to them, it prevents users
from being able to roll back to a previous release.
So let's leave the tables in place for now, and remember to drop them in a
subsequent release.
(Note that these tables haven't been *read* for *years*, so any missing rows
resulting from a temporary upgrade to vNext won't cause a problem.)
|
| | | | |
| | | | |
| | | | |
| | | | | |
Track the time that a server started failing at, for general analysis purposes.
|
|\ \ \ \ \
| | |_|_|/
| |/| | |
| | | | | |
erikj/censor_redactions
|
| | |_|/
| |/| |
| | | | |
Previously the stats were not being correctly populated.
|
|/ / / |
|
| | |
| | |
| | |
| | |
| | | |
Propagate opentracing contexts through EDUs
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| | | |
|
| | |
| | |
| | | |
These tables are never used, so we may as well drop them.
|
|/ / |
|
| | |
|
| |
| |
| |
| |
| |
| | |
Turns out not all rooms are in `rooms`, so let's fetch the room list from
`current_state_events`. We move the delta file to force it to be run
again.
|
| | |
|
|/
|
|
|
|
| |
This will allow us to efficiently filter out rooms that have been
forgotten in other queries without having to join against the
`room_memberships` table.
|
| |
|
| |
|
|
|
|
|
| |
It turns out that doing a join is surprisingly expensive for the DB to
do when the room_membership table is larger than the disk cache.
|
|
|
|
| |
Record how long an access token is valid for, and raise a soft-logout once it
expires.
|
| |
|
| |
|
|\
| |
| | |
Make a full SQL schema
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|/
|
| |
identity server (#5377)
Sends password reset emails from the homeserver instead of proxying to the identity server. This is now the default behaviour for security reasons. If you wish to continue proxying password reset requests to the identity server you must now enable the email.trust_identity_server_for_password_resets option.
This PR is a culmination of 3 smaller PRs which have each been separately reviewed:
* #5308
* #5345
* #5368
|
|\
| |
| | |
Speed up room stats background update
|
| | |
We have to do this by re-inserting a background update and recreating
tables, as the tables only get created during a background update and
will later be deleted.
We also make sure that we remove any entries that should have been
removed but weren't due to a race that has been fixed in a previous
commit.
|
|/ |
|
| |
|
|
|
|
|
| |
Due to #5269 we may have extremities in our DB that we shouldn't have,
so let's add a cleanup task to remove them.
|
|\
| |
| | |
Fix schema update for account validity
|
| | |
This is a first step to checking that the key is valid at the required moment.
The idea here is that, rather than passing VerifyKey objects in and out of the
storage layer, we instead pass FetchKeyResult objects, which simply wrap the
VerifyKey and add a valid_until_ts field.
|
| | |
|
|/ |
|
|\
| |
| | |
Send out emails with links to extend an account's validity period
|
| | |
|
|\ \
| | |
| | | |
Fix schema upgrade when dropping tables
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We need to drop tables in the correct order due to foreign table
constraints (on `application_services`), otherwise the DROP TABLE
command will fail.
Introduced in #4992.
|
|\ \ \
| |/ /
|/| /
| |/ |
Add time-based account expiration
|
| | |
|
| |
| |
| |
| | |
These have been unused since #4120, and with the demise of perspectives, it is
unlikely that they will ever be used again.
|
| | |
Tables dropped:
* application_services
* application_services_regex
* transaction_id_to_pdu
* stats_reporting
* current_state_resets
* event_content_hashes
* event_destinations
* event_edge_hashes
* event_signatures
* feedback
* room_hosts
* state_forward_extremities
|
| |
| |
| | |
Remove presence list support as per MSC 1819
|
| | |
|
| |
| |
| |
| |
| |
| | |
We assume, as we did before, that users bound their threepid to one of
the trusted identity servers. So we simply fill the new table with all
threepids in `user_threepids` joined with the trusted identity servers.
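Conceptually the backfill is a cross-join of `user_threepids` with the
configured trusted identity servers, e.g. (destination table name assumed, and
'matrix.org' standing in for one configured server):
```sql
INSERT INTO user_threepid_id_server (user_id, medium, address, id_server)
SELECT user_id, medium, address, 'matrix.org'
FROM user_threepids;
```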
|
|/
|
|
|
| |
This will then be used to know which IS to default to when unbinding the
threepid.
|
| |
|
| |
|
| |
|
| |
|
|\ |
|
| | |
|
|/ |
|
| |
Due to the table locks taken out by the naive upsert, the table
statistics may be out of date. During deduplication it is important that
the correct index is used as otherwise a full table scan may be
incorrectly used, which can end up thrashing the database badly.
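The remedy is to refresh the statistics before deduplicating, e.g. (assuming
the table in question is `user_ips`):
```sql
ANALYZE user_ips;
```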
|
| |
Currently we only have the one event format version defined, but this
adds the necessary infrastructure to persist and fetch the format
versions alongside the events.
We specify the format version rather than the room version as:
1. We don't necessarily know the room version, existing events may be
either v1 or v2.
2. We'd need to be careful to prevent/handle it correctly if different
events in the same room reported being of different versions, which
sounds annoying.
|
| |
Allow for the creation of a support user.
A support user can access the server, join rooms, interact with other users, but does not appear in the user directory nor does it contribute to monthly active user limits.
|
|
|
|
| |
Signed-off-by: Aaron Raimist <aaron@raim.ist>
|
|\
| |
| |
| | |
dbkr/e2e_backup_versions_are_numbers
|
| |
| |
| |
| |
| | |
The indexes on device_lists_remote_extremeties can be unique, and therefore
should be, to ensure that the db remains consistent.
|
| |\
| | |
| | |
| | | |
erikj/purge_state_groups
|
| | |
| | |
| | |
| | |
| | | |
This is needed to efficiently check for unreferenced state groups during
purge.
|
| | | |
|
| |/
|/|
| |
| |
| | |
We were doing max(version) which does not do what we wanted
on a column of type TEXT.
|
| |
| | |
Since we don't actually delete the keys, mark the versions as deleted in
the db rather than actually deleting them; that way we won't reuse
version numbers.
Fixes https://github.com/vector-im/riot-web/issues/7448
|
| |
| |
| |
| |
| |
| |
| | |
Continues from uhoreg's branch.
This just fixes the errcode on /room_keys/version when there is no backup, and
updates the schema delta to be on the latest version so it gets run.
|
|\| |
|
| | |
|
| | |
|
| |
| |
| |
| | |
This reverts commit ec716a35b219d147dee51733b55573952799a549.
|
| |
There's no real point in ever making the column non-nullable, and doing so
breaks the sytests.
|
|
|
|
|
|
| |
This field is no longer read from, so we should stop populating it. Once we're
happy that this doesn't break everything, and a rollback is unlikely, we can
think about dropping the column.
|
| |
|
| |
|
|
|
|
|
|
| |
matrix-org/rav/erasure_visibility""
This reverts commit 1d009013b3c3e814177afc59f066e02a202b21cd.
|
|
|
|
|
| |
This reverts commit ce0d911156b355c5bf452120bfb08653dad96497, reversing
changes made to b4a5d767a94f1680d07edfd583aae54ce422573e.
|
|
|
|
| |
to store which users have been erased
|
| |
|
| |
|
|
|
|
|
| |
When a user first syncs, we will send them a server notice asking them to
consent to the privacy policy if they have not already done so.
|
|\
| |
| | |
user visit data
|
| |\
| | |
| | |
| | | |
cohort_analytics
|
| | | |
|
| | | |
|
|\ \ \
| | | |
| | | | |
ConsentResource to gather policy consent from users
|
| | |/
| |/|
| | |
| | |
| | | |
Hopefully there are enough comments and docs in this that it makes sense on its
own.
|
|/ / |
|
|\ \
| | |
| | | |
remove duplicates from groups tables
|
| | | |
|
| | | |
|
| | | |
|
| |/
| |
| |
| |
| | |
and rename inconsistently named indexes.
Based on https://github.com/matrix-org/synapse/pull/3128 - thanks @vurpo!
|
|/
|
|
|
|
| |
plus a bonus next()
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
|\
| |
| | |
Add joinability for groups
|
| | |
|
| | |
The API is now under
/groups/$group_id/setting/m.join_policy
and expects a JSON blob of the shape
```json
{
"m.join_policy": {
"type": "invite"
}
}
```
where "invite" could alternatively be "open".
|
| | |
|
| | |
|
| | |
|
| | |
|
|\ \
| | |
| | | |
R30 stats
|
| |/ |
|
|/
|
|
| |
Let's use simplejson rather than json, for consistency.
|
| | | |
* Split state group persist into separate storage func
* Add per database engine code for state group id gen
* Move store_state_group to StateReadStore
This allows other workers to use it, and so resolve state.
* Hook up store_state_group
* Fix tests
* Rename _store_mult_state_groups_txn
* Rename StateGroupReadStore
* Remove redundant _have_persisted_state_group_txn
* Update comments
* Comment compute_event_context
* Set start val for state_group_id_seq
... otherwise we try to recreate old state groups
* Update comments
* Don't store state for outliers
* Update comment
* Update docstring as state groups are ints
|
| | | |
|
| |/ |
|
| | |
We're up to schema v47 on develop now, so this will have to go in there to have
an effect.
This might cause an error if somebody has already run it in the v46 guise, and
runs it again in the v47 guise, because it will cause a duplicate entry in the
background_updates table. On the other hand, the entry is removed once it is
complete, and it is unlikely that anyone other than matrix.org has run it on
v46. The update itself is harmless to re-run because it deliberately copes with
the index already existing.
|
|/ |
|
|\ |
|
| | |
Create the url_cache index on local_media_repository as a background update, so
that we can detect whether we are on sqlite or not and create a partial or
complete index accordingly.
To avoid running the cleanup job before we have built the index, add a bailout
which will defer the cleanup if the bg updates are still running.
Fixes https://github.com/matrix-org/synapse/issues/2572.
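The two variants would look something like this (the indexed column is an
assumption):
```sql
-- Partial index, where the engine supports it:
CREATE INDEX local_media_repository_url_idx
    ON local_media_repository (created_ts)
    WHERE url_cache IS NOT NULL;

-- Complete index as the fallback:
CREATE INDEX local_media_repository_url_idx
    ON local_media_repository (created_ts);
```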
|
|/ |
|
|
|
|
|
| |
* replace the upsert into deleted_pushers with an insert
* no need to lock for upsert on pusher_throttle
|
| |
Provide an interface by which password auth providers can register db schema
files to be run at startup
|
|
|
|
| |
Adding a column with a non-constant default is not possible in sqlite3.
|
| |
Prevent group API access to non-members for private groups
Also make all the group code paths consistent with `requester_user_id` always being the User ID of the requesting user.
|
|
|
|
| |
what could possibly go wrong
|