Commit log
==========
Optionally sends typing, presence, and read receipt information to appservices.
Rename Cache to DeferredCache, and related changes
Include user agent in user daily visits table.
Autocommit means that we don't wrap the functions in transactions; instead they are executed directly. Introduced in #8456. This will help:
1. reduce the number of `could not serialize access due to concurrent delete` errors that we see (though there are a few functions that often cause serialization errors that we don't fix here);
2. improve the DB performance, as it no longer needs to deal with the overhead of `REPEATABLE READ` isolation levels; and
3. improve wall clock speed of these functions, as we no longer need to send `BEGIN` and `COMMIT` to the DB.
Some notes about the differences between autocommit mode and our default `REPEATABLE READ` transactions:
1. Currently `autocommit` only applies when using PostgreSQL, and is ignored when using SQLite (due to silliness with [Twisted DB classes](https://twistedmatrix.com/trac/ticket/9998)).
2. Autocommit functions may get retried on error, which means they can get applied *twice* (or more) to the DB (since they are not in a transaction the previous call would not get rolled back). This means that the functions need to be idempotent (or otherwise not care about being called multiple times). Read queries, simple deletes, and updates/upserts that replace rows (rather than generating new values from existing rows) are all idempotent.
3. Autocommit functions no longer get executed in [`REPEATABLE READ`](https://www.postgresql.org/docs/current/transaction-iso.html) isolation level, and so data can change between queries, which is fine for single-statement queries.
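As a rough illustration of the trade-off (not Synapse's actual plumbing; the connection string and table are invented), a single idempotent upsert run in autocommit mode with psycopg2:

```python
import psycopg2

conn = psycopg2.connect("dbname=synapse")
conn.autocommit = True  # each execute() takes effect immediately: no BEGIN/COMMIT round-trips

with conn.cursor() as cur:
    # Idempotent: replaying the statement leaves the row in the same state,
    # so it is safe for the caller to retry on error.
    cur.execute(
        """
        INSERT INTO example_positions (instance_name, stream_id)
        VALUES (%s, %s)
        ON CONFLICT (instance_name) DO UPDATE
            SET stream_id = GREATEST(example_positions.stream_id, EXCLUDED.stream_id)
        """,
        ("worker1", 1234),
    )
```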
We asserted that the IDs returned by the postgres sequence were greater than
any we had seen; however, this is technically racy, as we may update the
current positions out of order.
We now assert that the sequences are correct on startup, so the
assertions are no longer really required and we remove them.
* Make sure a retention policy is a state event
* Changelog
(#8476)
Should fix #3365.
Currently background processes that stream the events stream use the "minimum persisted position" (i.e. `get_current_token()`) rather than the vector-clock-style tokens. This is broadly fine as it doesn't matter if the background processes lag a small amount. However, in extreme cases (i.e. SyTests) where we only write to one event persister, the background processes will never make progress.
This PR changes it so that the `MultiWriterIDGenerator` keeps the current position of a given instance as up to date as possible (i.e. using the latest token it sees if it's not in the process of persisting anything), and then periodically announces that over replication. This then allows the "minimum persisted position" to advance, albeit with a small lag.
When pulling events out of the DB to send over replication we were not
filtering by instance name, and so we were sending events for other
instances.
We call `_update_stream_positions_table_txn` a lot, which is an UPSERT
that can conflict in `REPEATABLE READ` isolation level. Instead of doing
a transaction consisting of a single query we may as well run it outside
of a transaction.
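For illustration, a sketch of the kind of single-statement upsert involved (hypothetical simplified schema; Synapse's real helper and placeholder handling differ):

```python
def update_stream_position(txn, stream_name, instance_name, stream_id):
    # Sketch only: a single-statement upsert. Wrapping this in a
    # REPEATABLE READ transaction adds conflict risk and round-trips
    # without adding any atomicity beyond what one statement already has.
    txn.execute(
        """
        INSERT INTO stream_positions (stream_name, instance_name, stream_id)
        VALUES (?, ?, ?)
        ON CONFLICT (stream_name, instance_name) DO UPDATE
            SET stream_id = GREATEST(stream_positions.stream_id, EXCLUDED.stream_id)
        """,
        (stream_name, instance_name, stream_id),
    )
```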
Currently, when using multiple event persisters we (in the worst case) don't tell clients about events until all event persisters have persisted new events after the original event. This is suboptimal, especially if one of the event persisters goes down.
To handle this, we encode the position of each event persister in the room tokens so that we can send events to clients immediately. To reduce the size of the token we do two things:
1. We create a unique immutable persistent mapping between instance names and a generated small integer ID, which we can encode in the tokens instead of the instance name; and
2. We encode the "persisted upto position" of the room token and then only explicitly include instances that have positions strictly greater than that.
The new tokens look something like `m3478~1.3488~2.3489`, where the first number is the min position and the subsequent `~`-separated `instance_id.position` pairs form the instance-to-position map. (We use `.` and `~` as separators as they're URL-safe and not already used by `StreamToken`.)
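For illustration, a sketch of decoding such a token (a hypothetical helper, not the real parser):

```python
from typing import Dict, Tuple

def parse_multi_writer_token(token: str) -> Tuple[int, Dict[int, int]]:
    # Sketch: "m3478~1.3488~2.3489" -> (3478, {1: 3488, 2: 3489})
    assert token.startswith("m")
    min_part, *pairs = token[1:].split("~")
    instance_map = {}
    for pair in pairs:
        instance_id, position = pair.split(".")
        instance_map[int(instance_id)] = int(position)
    return int(min_part), instance_map

assert parse_multi_writer_token("m3478~1.3488~2.3489") == (3478, {1: 3488, 2: 3489})
```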
This allows a user to store an offline device on the server and
then restore it at a subsequent login.
There's no need for it to be in the dict as well as the events table. Instead,
we store it in a separate attribute in the EventInternalMetadata object, and
populate that on load.
This means that we can rely on it being correctly populated for any event which
has been persisted to the database.
This fixes a bug where `m.ignored_user_list` was assumed to be a dict,
leading to odd behavior for users who set it to something else.
This is so we can tell what is going on when things are taking a while to start up.
The main change here is to ensure that transactions that are created during startup get correctly logged like normal transactions.
Synapse 1.21.0rc2 (2020-10-02)
==============================
Features
--------
- Convert additional templates from inline HTML to Jinja2 templates. ([\#8444](https://github.com/matrix-org/synapse/issues/8444))
Bugfixes
--------
- Fix a regression in v1.21.0rc1 which broke thumbnails of remote media. ([\#8438](https://github.com/matrix-org/synapse/issues/8438))
- Do not expose the experimental `uk.half-shot.msc2778.login.application_service` flow in the login API, which caused a compatibility problem with Element iOS. ([\#8440](https://github.com/matrix-org/synapse/issues/8440))
- Fix malformed log line in new federation "catch up" logic. ([\#8442](https://github.com/matrix-org/synapse/issues/8442))
- Fix DB query on startup for negative streams which caused long start up times. Introduced in [\#8374](https://github.com/matrix-org/synapse/issues/8374). ([\#8447](https://github.com/matrix-org/synapse/issues/8447))
For negative streams we have to negate the internal stream ID before
querying the DB.
The effect of this bug was to query far too many rows, slowing startup
time, but we would correctly filter the results afterwards so there was
no ill effect.
The idea is that in future tokens will encode a mapping of instance to position. However, we don't want to include the full instance name in the string representation, so instead we'll have a mapping between instance name and an immutable integer ID in the DB that we can use instead. We'll then do the lookup when we serialize/deserialize the token (we could alternatively pass around an `Instance` type that includes both the name and ID, but that turns out to be a lot more invasive).
Hopefully, N(extremities) * N(state_events) is a more realistic approximation
to "how big a problem is this room?".
This was a bit unwieldy for what I wanted: in particular, I wanted to assign
each measurement straight into a bucket, rather than storing an intermediate
Counter which didn't do any bucketing at all.
I've replaced it with something that is hopefully a bit easier to use.
(I'm not entirely sure what the difference between a HistogramMetricFamily and
a GaugeHistogramMetricFamily is, but given our counters can go down as well as
up the latter *sounds* more accurate?)
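As a rough illustration of assigning measurements straight into buckets and emitting them as a `GaugeHistogramMetricFamily` from `prometheus_client` (bucket bounds and metric name invented; the real collector is more involved):

```python
from prometheus_client.core import GaugeHistogramMetricFamily

def bucketed_metric(measurements):
    # Assign each measurement straight into a bucket, then emit the
    # cumulative counts the Prometheus histogram format expects.
    bounds = [1.0, 10.0, 100.0]
    counts = [0] * (len(bounds) + 1)  # trailing slot is the +Inf bucket
    for value in measurements:
        for i, bound in enumerate(bounds):
            if value <= bound:
                counts[i] += 1
                break
        else:
            counts[-1] += 1

    cumulative, running = [], 0
    for label, count in zip([str(b) for b in bounds] + ["+Inf"], counts):
        running += count
        cumulative.append((label, running))

    metric = GaugeHistogramMetricFamily(
        "synapse_example_sizes", "Example of direct bucketing"
    )
    metric.add_metric([], buckets=cumulative, gsum_value=sum(measurements))
    return metric
```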
* Fix table scan of events on worker startup.
This happened because we assumed "new" writers had an initial stream
position of 0, so the replication code tried to fetch all events written
by the instance between 0 and the current position.
Instead, set the initial position of new writers to the current
persisted up to position, on the assumption that new writers won't have
written anything before that point.
* Consider old writers coming back as "new".
Otherwise we'd try to fetch entries between the old stale token and the
current position, even though they won't have written any rows.
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
For some reason, an apparently unrelated PR upset mypy about this module. Here are a number of little fixes.
This is an attempt to fix #8403.
Co-authored-by: Benjamin Koch <bbbsnowball@gmail.com>
This adds configuration flags that will match a user to pre-existing users
when logging in via OpenID Connect. This is useful when switching to
an existing SSO system.
Fixes #8395.
On startup `MultiWriterIdGenerator` fetches the maximum stream ID for
each instance from the table and uses that as its initial "current
position" for each writer. This is problematic as a) it involves either
a scan of events table or an index (neither of which is ideal), and b)
if rows are being persisted out of order elsewhere while the process
restarts then using the maximum stream ID is not correct. This could
theoretically lead to race conditions where e.g. events that are
persisted out of order are not sent down sync streams.
We fix this by creating a new table that tracks the current positions of
each writer to the stream, and update it each time we finish persisting
a new entry. This is a relatively small overhead when persisting events.
However for the cache invalidation stream this is a much bigger relative
overhead, so instead we note that for invalidation we don't actually
care about reliability over restarts (as there are no caches to
invalidate) and simply don't bother reading and writing to the new table
in that particular case.
The idea is to remove some of the places we pass around `int`, where it can represent one of two things:
1. the position of an event in the stream; or
2. a token that partitions the stream, used as part of the stream tokens.
The valid operations are then:
1. did a position happen before or after a token;
2. get all events that happened before or after a token; and
3. get all events between two tokens.
(Note that we don't want to allow other operations as we want to change the tokens to be vector clocks rather than simple ints)
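A minimal sketch of the intended typed wrappers (names and fields are illustrative, not the real classes):

```python
import attr

@attr.s(frozen=True, slots=True)
class StreamToken:
    """Sketch: a point that partitions the stream (later, a vector clock)."""
    stream = attr.ib(type=int)

@attr.s(frozen=True, slots=True)
class EventPosition:
    """Sketch: where one event sits in the stream."""
    stream = attr.ib(type=int)

    def persisted_after(self, token: StreamToken) -> bool:
        # Operation 1: did this position happen before or after a token?
        # No other arithmetic is exposed, so callers can't treat these
        # as plain ints.
        return self.stream > token.stream
```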
This will allow us to hit the DB after we've finished using the generated stream ID.
When updating room_stats_state, we try to check for null bytes slipping into
the content for state events. It turns out we had added guest_access as a
field to room_stats_state without including it in the null byte check.
Lo and behold, a null byte in an m.room.guest_access event then breaks
room_stats_state updates.
This PR adds the check for guest_access. A further PR will improve this
function so that this hopefully does not happen again in future.
Fixes: #8359
Trying to reactivate a user with the admin API (`PUT /_synapse/admin/v2/users/<user_name>`) causes an internal server error.
Seems to be a regression in #8033.
Add an admin API to read entries of table `event_reports`. API: `GET /_synapse/admin/v1/event_reports`
Synapse 1.20.0rc5 (2020-09-18)
==============================
In addition to the below, Synapse 1.20.0rc5 also includes the bug fix that was included in 1.19.3.
Features
--------
- Add flags to the `/versions` endpoint for whether new rooms default to using E2EE. ([\#8343](https://github.com/matrix-org/synapse/issues/8343))
Bugfixes
--------
- Fix rate limiting of federation `/send` requests. ([\#8342](https://github.com/matrix-org/synapse/issues/8342))
- Fix a longstanding bug where back pagination over federation could get stuck if it failed to handle a received event. ([\#8349](https://github.com/matrix-org/synapse/issues/8349))
Internal Changes
----------------
- Blacklist [MSC2753](https://github.com/matrix-org/matrix-doc/pull/2753) SyTests until it is implemented. ([\#8285](https://github.com/matrix-org/synapse/issues/8285))
Instead of just using the most recent extremities let's pick the
ones that will give us results that the pagination request cares about,
i.e. pick extremities only if they have a smaller depth than the
pagination token.
This is useful when we fail to backfill an extremity, as we no longer
get stuck requesting that same extremity repeatedly.
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
* Fix _set_destination_retry_timings
This came about because the code assumed that retry_interval
could not be NULL — which has been challenged by catch-up.
This converts calls like `super(Foo, self)` into `super()`.
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
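For clarity, the shape of the rewrite (illustrative class names):

```python
class BaseHandler:
    def __init__(self, hs):
        self.hs = hs

# Python 2 compatible form, as matched by the sed pattern above:
class OldStyle(BaseHandler):
    def __init__(self, hs):
        super(OldStyle, self).__init__(hs)

# Python 3 zero-argument form, as produced by the rewrite:
class NewStyle(BaseHandler):
    def __init__(self, hs):
        super().__init__(hs)
```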
Slots use less memory (and attribute access is faster) while slightly
limiting the flexibility of the class attributes. This focuses on objects
which are instantiated "often" and for short periods of time.
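A minimal sketch of the pattern with attrs (field names invented):

```python
import attr

@attr.s(slots=True)
class ShortLivedContext:
    # With slots=True, instances get fixed attribute slots instead of a
    # per-instance __dict__, which saves memory for objects created often
    # and kept briefly.
    state_group = attr.ib(type=int)
    rejected = attr.ib(type=bool, default=False)

ctx = ShortLivedContext(state_group=42)
# ctx.other = 1  # AttributeError: slotted classes can't grow new attributes
```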
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
missing push rules in `.../actions` and `.../enabled` (#7796)
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
This fixes an issue where different methods (crop/scale) overwrite each other.
This first tries the new path. If that fails and we are looking for a
remote thumbnail, it tries the old path. If that still isn't found, it
continues as normal.
This should probably be removed in the future, after some of the newer
thumbnails have been generated with the new path on most deployments. Then
the overhead should be minimal if the other thumbnails need to be
regenerated.
Signed-off-by: Nicolas Werner <nicolas.werner@hotmail.de>
The intention here is to change `StreamToken.room_key` to be a `RoomStreamToken` in a future PR, but that is a big enough change without this refactoring too.
It did not correctly handle IDs finishing being persisted out of
order, resulting in the `current_position` lagging until new IDs are
persisted.
'populate_stats_process_rooms' again (#8243)
Fixes https://github.com/matrix-org/synapse/issues/8238
Alongside the delta file, some changes were also necessary to the codebase to remove references to the now defunct `populate_stats_process_rooms_2` background job. Thankfully the latter doesn't seem to have made it into any documentation yet :)
This takes about 10 seconds in the best case; often more.
Fix unread counts making sync fail if the value of the `unread_count`
column in `event_push_summary` is `None`.
I'm hoping this will provide some pointers for debugging
https://github.com/matrix-org/synapse/issues/7968.
Fixes: #6467
All the callers want this info in the same place, so let's reduce the
duplication by doing it here.
* Fixup `ALTER TABLE` database queries
Make the new columns nullable, because doing otherwise can wedge a
server with a big database, as setting a default value rewrites the
table.
* Switch back to using the notifications count in the push badge
Clients are likely to be confused if we send a push but the badge count
is the unread messages one, and not the notifications one.
* Changelog
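A sketch of the nullable-column point from the first item (simplified schema delta; the column name follows this PR):

```python
def upgrade(txn):
    # Nullable, no default: on PostgreSQL this is a cheap catalogue change.
    txn.execute("ALTER TABLE event_push_summary ADD COLUMN unread_count BIGINT")
    # By contrast, "ADD COLUMN unread_count BIGINT NOT NULL DEFAULT 0" would
    # (before PostgreSQL 11) rewrite every row of the table, which can wedge
    # a server with a big database.
```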
ordering after transmission (#8247)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
* Revert "Add experimental support for sharding event persister. (#8170)"
This reverts commit 82c1ee1c22a87b9e6e3179947014b0f11c0a1ac3.
* Changelog
I think this is simpler (and moves stuff out of the db threads)
We have three things which all call `_get_e2e_device_keys_and_signatures_txn`
with their own `runInteraction`. Factor out the common code.
We can use the existing `_get_e2e_device_keys_and_signatures_txn` instead of
creating our own txn function
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
* Add shared_rooms api
* Add changelog
* Add .
* Wrap response in {"rooms": }
* linting
* Add unstable_features key
* Remove options from isort that aren't part of 5.x
`-y` and `-rc` are now default behaviour and no longer exist.
`dont-skip` is no longer required
https://timothycrosley.github.io/isort/CHANGELOG/#500-penny-july-4-2020
* Update imports to make isort happy
* Add changelog
* Update tox.ini file with correct invocation
* fix linting again for isort
* Vendor prefix unstable API
* Fix to match spec
* import Codes
* import Codes
* Use FORBIDDEN
* Update changelog.d/7785.feature
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
* Implement get_shared_rooms_for_users
* a comma
* trailing whitespace
* Handle the easy feedback
* Switch to using runInteraction
* Add tests
* Feedback
* Separate unstable endpoint from v2
* Add upgrade node
* a line
* Fix style by adding a blank line at EOF.
* Update synapse/storage/databases/main/user_directory.py
Co-authored-by: Tulir Asokan <tulir@maunium.net>
* Update synapse/storage/databases/main/user_directory.py
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
* Update UPGRADE.rst
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
* Fix UPGRADE/CHANGELOG unstable paths
unstable unstable unstable
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Tulir Asokan <tulir@maunium.net>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Tulir Asokan <tulir@maunium.net>
this makes it a bit clearer what's going on.
... to `_get_e2e_device_keys_and_signatures_txn`, to better reflect what it
does.
(#8203)
This is so that we can use it for the backfill events stream.
* Move `get_devices_with_keys_by_user` to `EndToEndKeyWorkerStore`
this seems a better fit for it.
This commit simply moves the existing code: no other changes at all.
* Rename `get_devices_with_keys_by_user`
to better reflect what it does.
* get_device_stream_token abstract method
To avoid referencing fields which are declared in the derived classes, make
`get_device_stream_token` abstract, and define that in the classes which define
`_device_list_id_gen`.
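As a sketch of the third point (illustrative class names, not the real store hierarchy):

```python
import abc

class DeviceWorkerStoreSketch(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def get_device_stream_token(self) -> int:
        """Declared abstractly so the base class never references
        _device_list_id_gen, which only derived classes define."""

class DeviceStoreSketch(DeviceWorkerStoreSketch):
    def __init__(self, device_list_id_gen):
        self._device_list_id_gen = device_list_id_gen

    def get_device_stream_token(self) -> int:
        return self._device_list_id_gen.get_current_token()
```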
... and to show that it does something slightly different to
`_get_e2e_device_keys_txn`.
`include_all_devices` and `include_deleted_devices` were never used (and
`include_deleted_devices` was broken, since it would cause `None`s in the
result which were not handled in the loop below).
Add some typing too.
async (#8197)
There's not much point in returning all the others, and some people have a
silly number of devices.
This was forgotten in #8164.
insert_many, delete_*) (#8168)
(#8171)
This is mainly so that `StreamIdGenerator` and `MultiWriterIdGenerator`
will have the same interface, allowing them to be used interchangeably.
Some fixes to wording I noticed after merging #7377.
* Search in columns 'name' and 'displayname' in the admin users endpoint
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
request_token_inhibit_3pid_errors is turned on (#7991)
* Don't raise session_id errors on submit_token if request_token_inhibit_3pid_errors is set
* Changelog
* Also wait some time before responding to /requestToken
* Incorporate review
* Update synapse/storage/databases/main/registration.py
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
* Incorporate review
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
It turns out that not all out-of-band membership events are labelled as such,
so we need to be more accepting here.
... otherwise it gets leaked.
It's just a thin wrapper around two ID gens to make `get_current_token`
and `get_next` return tuples. This can easily be replaced by calling the
appropriate methods on the underlying ID gens directly.
The function is used for two purposes: 1) for subscribers of streams to
get a token they can use to get further updates with, and 2) for
replication to track position of the writers of the stream.
For streams with a single writer the two scenarios produce the same
result, however the situation becomes complicated for streams with
multiple writers. The current `MultiWriterIdGenerator` does not
correctly handle the first case (which is not an issue as it's only used
for the `caches` stream which nothing subscribes to outside of
replication).
async/await (#8063)
Change HomeServer definition to work with typing.
Duplicating function signatures between server.py and server.pyi is
silly. This commit changes that by converting all `build_*` methods to
`get_*` methods and changing `_make_dependency_method` to work
as a descriptor that caches the produced value.
There are some changes in other files that were made to fix the typing
in server.py.
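A minimal sketch of such a caching descriptor (not the actual implementation):

```python
class CachedGetter:
    """Non-data descriptor: the first access builds the dependency and
    stores a cached getter in the instance __dict__, which shadows the
    descriptor on every later lookup."""

    def __init__(self, builder):
        self._builder = builder
        self._name = builder.__name__

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        value = self._builder(instance)
        # Keep the get_*() call syntax: cache a closure returning the value.
        instance.__dict__[self._name] = lambda: value
        return instance.__dict__[self._name]

class HomeServerSketch:
    @CachedGetter
    def get_clock(self):
        import time
        return time.monotonic

hs = HomeServerSketch()
clock = hs.get_clock()  # built once; later calls hit the cached closure
```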
With an undocumented configuration setting to enable them for specific users.
babolivier/new_push_rules
babolivier/new_push_rules
database to async (#8042)
`StatsHandler` handles updates to the `current_state_delta_stream`, and updates room stats such as the number of state events, joined users, etc.
However, it counts every new join membership as a new user entering a room (and that user being in another room), whereas it's possible for a user's membership status to go from join -> join, for instance when they change their per-room profile information.
This PR adds a check for join->join membership transitions, and bails out early, as none of the further checks are necessary at that point.
Due to this bug, membership stats in many rooms have ended up being wildly larger than their true values. I am not sure if we also want to include a migration step which recalculates these statistics (possibly using the `_populate_stats_process_rooms` bg update).
Bug introduced in the initial implementation https://github.com/matrix-org/synapse/pull/4338.
Idea from matrix-org/synapse-dinsic#49
It serves no purpose, and updating it every time we write to the device inbox
stream means all such transactions will conflict, causing lots of
transaction failures and retries.
It's somewhat expected for us to have unknown room versions in the
database due to room version experiments.
objects. (#7849)
Fix guest user registration with lots of client readers
partly just to show it works, but also to remove a bit of code duplication.
When considering rooms to clean up in `delete_old_current_state_events`, skip
rooms which we are creating, which otherwise look a bit like rooms we have
left.
Fixes #7834.
As far as I can tell from the sentry logs, the only time this has actually done
anything in the last two years is when we had two master workers running at
once, and even then, it made a bit of a mess of it (see
https://github.com/matrix-org/synapse/issues/7845#issuecomment-658238739).
Generally I feel like this code is doing more harm than good.
The Delete Room admin API allows server admins to remove rooms from server
and block these rooms.
`DELETE /_synapse/admin/v1/rooms/<room_id>`
It is a combination and improvement of "[Shutdown room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/shutdown_room.md)" and "[Purge room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/purge_room.md)" API.
Fixes: #6425
It also fixes a bug in [synapse/storage/data_stores/main/room.py](synapse/storage/data_stores/main/room.py) in `get_room_with_stats`:
it should return `None` if the room is unknown, but instead it raises an `IndexError`.
https://github.com/matrix-org/synapse/blob/901b1fa561e3cc661d78aa96d59802cf2078cb0d/synapse/storage/data_stores/main/room.py#L99-L105
Related to:
- #5575
- https://github.com/Awesome-Technologies/synapse-admin/issues/17
Signed-off-by: Dirk Klimpel dirk@klimpel.org
Fixes #2181.
The basic premise is that, when we
fail to reject an invite via the remote server, we can generate our own
out-of-band leave event and persist it as an outlier, so that we have something
to send to the client.
This table is no longer used, so we may as well stop populating it. Removing it
would prevent people rolling back to older releases of Synapse, so that can
happen in a future release.
The CI appears to use the latest version of isort, which is a problem when isort gets a major version bump. Rather than try to pin the version, I've done the necessary to make isort5 happy with synapse.
This makes it much easier to find where streams are referenced.
* Always return an unread_count in get_unread_event_push_actions_by_room_for_user
* Don't always expect unread_count to be there so we don't take out sync entirely if something goes wrong
Implementation of https://github.com/matrix-org/matrix-doc/pull/2625
This is a potential solution to https://github.com/vector-im/riot-web/issues/3374
and https://github.com/vector-im/riot-web/issues/5953
as raised by Mozilla at https://github.com/vector-im/riot-web/issues/10868.
This lets you define a push rule action which increases the badge count (unread notification)
count on a given room, but doesn't actually send a push for that notification via email or HTTP.
We might want to define this as the default behaviour for group chats in future
to solve https://github.com/vector-im/riot-web/issues/3268 at last.
This is implemented as a string action rather than a tweak because:
* Other pushers don't care about the tweak, given they won't ever get pushed
* The DB can store the tweak more efficiently using the existing `notify` table.
* It avoids breaking the default_notif/highlight_action optimisations.
Clients which generate their own notifs (e.g. desktop notifs from Riot/Web)
would need to be aware of the new push action in order to uphold it.
An alternative way to do this would be to maintain a `msg_count` alongside
`highlight_count` and `notification_count` in `unread_notifications` in sync responses.
However, doing this by counting the rows in `events` since the `stream_position`
of the user's last read receipt turns out to be painfully slow (~200ms), perhaps
due to the size of the events table. So instead, we use the highly optimised
existing event_push_actions (and event_push_actions_staging) table to maintain
the counts - using the code paths which already exist for tracking unread
notification counts efficiently. These queries are typically ~3ms or so.
The biggest issues I see here are:
* We're slightly repurposing the `notif` field on `event_push_actions` to
track whether a given action actually sent a `push` or not. This doesn't
seem unreasonable, but it's slightly naughty given that previously the
field explicitly tracked whether `notify` was true for the action (and
as a result, it was uselessly always set to 1 in the DB).
* We're going to put more load on the `event_push_actions` table for all the
random group chats which people had previously muted. In practice i don't
think there are many of these though.
* There isn't an MSC for this yet (although this comment could become one).
The aim here is to make it easier to reason about when streams are limited and when they're not, by moving the logic into the database functions themselves. This should mean we can kill off the `db_query_to_update_function` function.
* Ensure account data stream IDs are unique.
The account data stream is shared between three tables, and the maximum
allocated ID was tracked in a dedicated table. Updating the max ID
happened outside the transaction that allocated the ID, leading to a
race where if the server was restarted then the same ID could be
allocated but the max ID failed to be updated, leading it to be reused.
The ID generators have support for tracking across multiple tables, so
we may as well use that instead of a dedicated table.
* Fix bug in account data replication stream.
If the same stream ID was used in both global and room account data then
getting updates for the replication stream would fail due to
`heapq.merge(..)` trying to compare a `str` with a `None`. (This is
because you'd have two rows like `(534, '!room')` and `(534, None)` from
the room and global account data tables).
Fix is just to order by stream ID, since we don't rely on the ordering
beyond that. The bug where stream IDs can be reused should be fixed now,
so this case shouldn't happen going forward.
Fixes #7617
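The `heapq.merge(..)` failure described in the second item is easy to reproduce in isolation (a sketch, not Synapse code):

```python
import heapq

global_rows = [(534, None)]     # global account data row: no room ID
room_rows = [(534, "!room")]    # room account data row

try:
    # Equal stream IDs force a comparison of the second elements, and
    # str vs None ordering raises TypeError on Python 3.
    list(heapq.merge(global_rows, room_rows))
except TypeError:
    pass

# The fix: order by stream ID only.
merged = list(heapq.merge(global_rows, room_rows, key=lambda row: row[0]))
```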
Based on #7619. Makes `get_user_id_by_threepid` and its call stack async.
The query keeps showing up in my slow query log.
This changes the plan under the top-level Sort node from
```
WindowAgg (cost=280335.88..292963.15 rows=561212 width=80) (actual time=138.651..160.562 rows=27112 loops=1)
-> Sort (cost=280335.88..281738.91 rows=561212 width=84) (actual time=138.597..140.622 rows=27112 loops=1)
Sort Key: state_groups_state.type, state_groups_state.state_key, state_groups_state.state_group
Sort Method: quicksort Memory: 4581kB
-> Nested Loop (cost=2.83..226745.22 rows=561212 width=84) (actual time=21.548..47.657 rows=27112 loops=1)
-> HashAggregate (cost=2.27..3.28 rows=101 width=8) (actual time=21.526..21.535 rows=20 loops=1)
Group Key: state.state_group
-> CTE Scan on state (cost=0.00..2.02 rows=101 width=8) (actual time=21.280..21.493 rows=20 loops=1)
-> Index Scan using state_groups_state_type_idx on state_groups_state (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.005..0.991 rows=1356 loops=20)
Index Cond: (state_group = state.state_group)
```
to
```
Nested Loop (cost=2.83..226745.22 rows=561212 width=84) (actual time=24.194..52.834 rows=27112 loops=1)
-> HashAggregate (cost=2.27..3.28 rows=101 width=8) (actual time=24.130..24.138 rows=20 loops=1)
Group Key: state.state_group
-> CTE Scan on state (cost=0.00..2.02 rows=101 width=8) (actual time=23.887..24.113 rows=20 loops=1)
-> Index Scan using state_groups_state_type_idx on state_groups_state (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.016..1.159 rows=1356 loops=20)
Index Cond: (state_group = state.state_group)
```
This cuts the execution time from ~190ms to ~130ms, i.e. a reduction
of ~30%.
The full plans are visualised at https://explain.depesz.com/s/WpbT and
https://explain.depesz.com/s/KlEk
Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Fixes #7469
Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
We were using `logger`-style `%` formatting, which isn't supported by `Exception`s.
The bg update never managed to complete, because it kept being interrupted by
transactions which want to take a lock.
Just doing it in the foreground isn't that bad, and is a good deal simpler.
we can use `make_in_list_sql_clause` rather than doing our own half-baked
equivalent, which has the benefit of working just fine with empty lists.
(This has quite a lot of tests, so I think it's pretty safe)
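For illustration, a simplified sketch of what a helper like `make_in_list_sql_clause` provides (the real one also takes the database engine into account):

```python
from typing import Any, Iterable, List, Tuple

def make_in_list_clause_sketch(column: str, values: Iterable[Any]) -> Tuple[str, List[Any]]:
    # Build "column IN (?, ?, ...)" plus its argument list. An empty
    # iterable yields a never-true clause, since "IN ()" is invalid SQL.
    args = list(values)
    if not args:
        return "1 = 0", []
    placeholders = ", ".join("?" for _ in args)
    return f"{column} IN ({placeholders})", args

clause, args = make_in_list_clause_sketch("event_id", ["$ev1", "$ev2"])
assert clause == "event_id IN (?, ?)" and args == ["$ev1", "$ev2"]
```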
These are surprisingly expensive, and we only really need to do them at startup.
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).
Most of Synapse doesn't hit such races, so we don't do the waiting automagically, instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.
People probably want to look at this commit by commit.
When a call to `user_device_resync` fails, we don't currently mark the remote user's device list as out of sync, nor do we retry to sync it.
https://github.com/matrix-org/synapse/pull/6776 introduced some code infrastructure to mark device lists as stale/out of sync.
This commit uses that code infrastructure to mark device lists as out of sync if processing an incoming device list update makes the device handler realise that the device list is out of sync, but we can't resync right now.
It also adds a looping call to retry all failed resyncs every 30s. This shouldn't cause too much spam in the logs, as this commit also removes the "Failed to handle device list update for..." warning logs when catching `NotRetryingDestination`.
Fixes #7418
`_is_server_still_joined` will throw if it is given state updates with non-user-ID state keys alongside local user leaves. This is actually rarely a problem since local leaves almost always get persisted by themselves.
(I discovered this on a branch that was otherwise broken, so I haven't seen this in the wild)
Kill off some old python 2 code
this is no longer needed on python 3
this is a no-op on python 3.
this is a no-op on python 3.
Make sure that the AccountDataStream presents complete updates, in the right
order.
This is much the same fix as #7337 and #7358, but applied to a different stream.
This is required as both event persistence and the background update needs access to this function. It should be perfectly safe for two workers to write to that table at the same time.
queries (#7465)
This allows us to have the logic on both master and workers, which is necessary to move event persistence off master.
We also combine the instantiation of ID generators from DataStore and slave stores to the base worker stores. This allows us to select which process writes events independently of the master/worker splits.
update_remote_profile_cache (#7511)
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))