| Commit message | Author | Age | Files | Lines |
| |
This uses a simplified version of get_chain_cover_difference to calculate
the auth chain of events.
|
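As a rough illustration of what calculating the auth chain of a set of events involves (a sketch; `get_auth_event_ids` is a hypothetical stand-in for the storage lookup, not Synapse's actual API): each event lists its auth events, and the chain is the transitive closure of those links.

```python
from typing import Callable, Iterable, Set

def get_auth_chain_ids(
    event_ids: Iterable[str],
    get_auth_event_ids: Callable[[str], Iterable[str]],
) -> Set[str]:
    """Collect the full auth chain by walking auth_events links breadth-first."""
    seen: Set[str] = set()
    frontier = set(event_ids)
    while frontier:
        event_id = frontier.pop()
        for auth_id in get_auth_event_ids(event_id):
            if auth_id not in seen:
                seen.add(auth_id)  # each event is expanded at most once
                frontier.add(auth_id)
    return seen
```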
| |
Unfortunately this doesn't test re-joining the room since
that requires having another homeserver to query over
federation, which isn't easily doable in unit tests.
|
| |
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
| |
We do this by allowing a single iteration to process multiple rooms at a
time, as there are often a lot of really tiny rooms, which can massively
slow things down.
|
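A minimal sketch of the batching idea, assuming hypothetical `get_next_room`/`process_room` helpers: keep pulling rooms into the same iteration until a row budget is spent, so many tiny rooms no longer cost one round trip each.

```python
def run_background_iteration(get_next_room, process_room, target_rows: int = 100) -> int:
    """Process as many rooms as fit within one iteration's row budget."""
    rows_processed = 0
    while rows_processed < target_rows:
        room_id = get_next_room()
        if room_id is None:
            break  # no rooms left to update
        # Tiny rooms contribute few rows, so many fit into a single batch.
        rows_processed += process_room(room_id)
    return rows_processed
```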
| |
This only applies if the user's data is to be erased.
|
| |
This allows for efficiently finding which users ignore a particular
user.
Co-authored-by: Erik Johnston <erik@matrix.org>
|
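The schema below is an illustrative sketch (not necessarily Synapse's exact DDL) of how such a reverse lookup can be made cheap: one row per (ignorer, ignored) pair, with an index on the ignored user.

```python
CREATE_IGNORED_USERS = """
CREATE TABLE ignored_users (
    ignorer_user_id TEXT NOT NULL,  -- the user doing the ignoring
    ignored_user_id TEXT NOT NULL   -- the user being ignored
);
CREATE UNIQUE INDEX ignored_users_uniqueness
    ON ignored_users (ignorer_user_id, ignored_user_id);
CREATE INDEX ignored_users_ignored_user_id
    ON ignored_users (ignored_user_id);
"""

# "Which users ignore @alice:example.org?" becomes one indexed query:
WHO_IGNORES_SQL = (
    "SELECT ignorer_user_id FROM ignored_users WHERE ignored_user_id = ?"
)
```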
| |
If we see stale extremities while persisting events, and notice that
they don't change the result of state resolution, we drop them.
|
| |
* Use the simple dictionary in fts for the user directory
* Clarify naming
|
| |
This is so that we can choose which algorithm to use based on the room ID.
|
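One plausible way to pick an algorithm deterministically from the room ID (a sketch; the hashing scheme and rollout threshold are assumptions, not the actual implementation): hash the room ID and compare against a rollout percentage.

```python
import hashlib

def use_new_algorithm(room_id: str, rollout_percent: int = 10) -> bool:
    """Deterministically choose the algorithm from the room ID alone."""
    digest = hashlib.sha256(room_id.encode("utf-8")).digest()
    # digest[0] is uniform over 0..255, so this selects roughly
    # rollout_percent of rooms, and always the same rooms.
    return digest[0] < 256 * rollout_percent // 100
```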
| |
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
|
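A sketch of the check itself using the standard library (Synapse actually uses `netaddr`; the CIDR list here is illustrative):

```python
from ipaddress import ip_address, ip_network

IP_RANGE_BLACKLIST = [
    ip_network(cidr)
    for cidr in ("127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_blacklisted(address: str) -> bool:
    """True if the resolved address falls inside any blacklisted range."""
    addr = ip_address(address)
    return any(addr in net for net in IP_RANGE_BLACKLIST)
```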
| |
(#8827)
We do state res with unpersisted events when calculating the new current state of the room, so that should be the only thing impacted. I don't think this is too big of a deal as:
1. the next time a state event happens in the room the current state should correct itself;
2. in the common case all the unpersisted events' auth events will be pulled in by other state, so will still return the correct result (or one which is sufficiently close to not affect the result); and
3. we mostly use the state at an event to do important operations, which isn't affected by this.
|
| |
These are now only available via `/_synapse/admin/v1`.
|
| |
Make `make_request` actually render the request
|
| |
Some tests want to set some custom HTTP request headers, so provide a way to do
that before calling requestReceived().
|
| |
another user. (#8616)
We do it this way round so that only the "owner" can delete the access token (i.e. `/logout/all` by the "owner" also deletes that token, but `/logout/all` by the "target user" doesn't).
A future PR will add an API for creating such a token.
When the target user and authenticated entity are different the `Processed request` log line will be logged with a: `{@admin:server as @bob:server} ...`. I'm not convinced by that format (especially since it adds spaces in there, making it harder to use `cut -d ' '` to chop off the start of log lines). Suggestions welcome.
|
| |
This allows trailing commas in multi-line arg lists.
Minor, but we might as well keep our formatting current with regard to
our minimum supported Python version.
Signed-off-by: Dan Callahan <danc@element.io>
|
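Concretely, targeting Python 3.6+ makes a trailing comma legal even after `*args`/`**kwargs` in a call, which is what lets the formatter keep one-argument-per-line layouts:

```python
def do_something(*args, **kwargs):
    return args, kwargs

extra_args = (3, 4)
extra_kwargs = {"key": "value"}

# A syntax error on Python 3.5, fine on 3.6+:
result = do_something(
    1,
    2,
    *extra_args,
    **extra_kwargs,
)
```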
| |
Optionally sends typing, presence, and read receipt information to appservices.
|
| |
Rename Cache to DeferredCache, and related changes
|
| |
Update `EventCreationHandler.create_event` to accept an auth_events param, and
use it in `_locally_reject_invite` instead of reinventing the wheel.
|
| |
Currently, background processes that stream the events stream use the "minimum persisted position" (i.e. `get_current_token()`) rather than the vector clock style tokens. This is broadly fine, as it doesn't matter if the background processes lag a small amount. However, in extreme cases (i.e. SyTests) where we only write to one event persister, the background processes will never make progress.
This PR changes it so that the `MultiWriterIDGenerator` keeps the current position of a given instance as up to date as possible (i.e. using the latest token it sees if it's not in the process of persisting anything), and then periodically announces that over replication. This then allows the "minimum persisted position" to advance, albeit with a small lag.
|
| |
We call `_update_stream_positions_table_txn` a lot, which is an UPSERT
that can conflict in `REPEATABLE READ` isolation level. Instead of doing
a transaction consisting of a single query we may as well run it outside
of a transaction.
|
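A sketch of the resulting pattern with psycopg2 (column names are illustrative): run the lone UPSERT in autocommit mode, so there is no long-lived `REPEATABLE READ` snapshot to conflict with.

```python
import psycopg2

conn = psycopg2.connect("dbname=synapse")
conn.autocommit = True  # each statement commits on its own

def update_stream_position(stream_name: str, instance_name: str, stream_id: int) -> None:
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO stream_positions (stream_name, instance_name, stream_id)
            VALUES (%s, %s, %s)
            ON CONFLICT (stream_name, instance_name)
            DO UPDATE SET stream_id = EXCLUDED.stream_id
            """,
            (stream_name, instance_name, stream_id),
        )
```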
| |
This is so we can tell what is going on when things are taking a while to start up.
The main change here is to ensure that transactions that are created during startup get correctly logged like normal transactions.
|
| |
The idea is that in future tokens will encode a mapping of instance to position. However, we don't want to include the full instance name in the string representation, so instead we'll have a mapping between instance name and an immutable integer ID in the DB that we can use instead. We'll then do the lookup when we serialize/deserialize the token (we could alternatively pass around an `Instance` type that includes both the name and ID, but that turns out to be a lot more invasive).
|
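A sketch of the lookup described above (the `instance_map` table shape and helper are illustrative): allocate an integer ID per instance name once, then cache it for serialising/deserialising tokens.

```python
_instance_id_cache: dict = {}

def get_instance_id(txn, instance_name: str) -> int:
    """Return the immutable integer ID for an instance name, allocating on first use."""
    if instance_name in _instance_id_cache:
        return _instance_id_cache[instance_name]
    # Assumes a UNIQUE index on instance_name and an autoincrementing instance_id.
    txn.execute(
        "INSERT INTO instance_map (instance_name) VALUES (?) "
        "ON CONFLICT (instance_name) DO NOTHING",
        (instance_name,),
    )
    txn.execute(
        "SELECT instance_id FROM instance_map WHERE instance_name = ?",
        (instance_name,),
    )
    instance_id = txn.fetchone()[0]
    _instance_id_cache[instance_name] = instance_id
    return instance_id
```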
| |
This was a bit unwieldy for what I wanted: in particular, I wanted to assign
each measurement straight into a bucket, rather than storing an intermediate
Counter which didn't do any bucketing at all.
I've replaced it with something that is hopefully a bit easier to use.
(I'm not entirely sure what the difference between a HistogramMetricFamily and
a GaugeHistogramMetricFamily is, but given our counters can go down as well as
up the latter *sounds* more accurate?)
|
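A sketch of assigning measurements straight into buckets and exporting them as a gauge histogram (bucket bounds and names are illustrative, and `gsum_value` is approximated by the total count here):

```python
from prometheus_client.core import GaugeHistogramMetricFamily, REGISTRY

class BucketCollector:
    """Exports a {upper_bound: count} mapping as a gauge histogram."""

    def __init__(self, name: str, get_counts):
        self.name = name
        self.get_counts = get_counts  # callable returning {bound: count}
        REGISTRY.register(self)

    def collect(self):
        counts = self.get_counts()
        buckets, running = [], 0
        for bound in sorted(counts):
            running += counts[bound]  # gauge histograms want cumulative counts
            buckets.append((str(float(bound)), running))
        buckets.append(("+Inf", running))
        yield GaugeHistogramMetricFamily(
            self.name, "bucketed measurements", buckets=buckets, gsum_value=running
        )
```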
| |
* Fix table scan of events on worker startup.
This happened because we assumed "new" writers had an initial stream
position of 0, so the replication code tried to fetch all events written
by the instance between 0 and the current position.
Instead, set the initial position of new writers to the current
persisted up to position, on the assumption that new writers won't have
written anything before that point.
* Consider old writers coming back as "new".
Otherwise we'd try and fetch entries between the old stale token and the
current position, even though it won't have written any rows.
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
|
| |
This is an attempt to fix #8403.
|
| |
On startup `MultiWriteIdGenerator` fetches the maximum stream ID for
each instance from the table and uses that as its initial "current
position" for each writer. This is problematic as a) it involves either
a scan of the events table or an index (neither of which is ideal), and b)
if rows are being persisted out of order elsewhere while the process
restarts then using the maximum stream ID is not correct. This could
theoretically lead to race conditions where e.g. events that are
persisted out of order are not sent down sync streams.
We fix this by creating a new table that tracks the current positions of
each writer to the stream, and update it each time we finish persisting
a new entry. This is a relatively small overhead when persisting events.
However for the cache invalidation stream this is a much bigger relative
overhead, so instead we note that for invalidation we don't actually
care about reliability over restarts (as there are no caches to
invalidate) and simply don't bother reading and writing to the new table
in that particular case.
|
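Illustrative DDL for the table described above: one row per (stream, writer) holding the position that writer has persisted up to.

```python
CREATE_STREAM_POSITIONS = """
CREATE TABLE stream_positions (
    stream_name   TEXT NOT NULL,   -- e.g. 'events' or 'caches'
    instance_name TEXT NOT NULL,   -- which writer this row tracks
    stream_id     BIGINT NOT NULL  -- position persisted up to
);
CREATE UNIQUE INDEX stream_positions_idx
    ON stream_positions (stream_name, instance_name);
"""
```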
| |
This will allow us to hit the DB after we've finished using the generated stream ID.
|
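The shape this enables looks roughly like the following sketch (`get_next` as an async context manager matches later Synapse; the other helper names are hypothetical): the ID is only marked persisted once the block exits, after which further database work is safe.

```python
async def persist_event(id_gen, db, event) -> int:
    async with id_gen.get_next() as stream_id:
        await db.insert_event(event, stream_id)
    # Only after the context exits is stream_id considered persisted,
    # so queries here observe a consistent stream.
    await db.update_caches_for(event)
    return stream_id
```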
| |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
|
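For illustration, the before/after shape of the rewrite:

```python
class BaseStore:
    def __init__(self, db_conn, hs):
        self.db_conn, self.hs = db_conn, hs

# Before: Python 2 style, naming the class and instance explicitly.
class FooStoreOld(BaseStore):
    def __init__(self, db_conn, hs):
        super(FooStoreOld, self).__init__(db_conn, hs)

# After: the zero-argument form the sed rewrite produces.
class FooStoreNew(BaseStore):
    def __init__(self, db_conn, hs):
        super().__init__(db_conn, hs)
```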
| |
It did not correctly handle IDs finishing being persisted out of
order, resulting in the `current_position` lagging until new IDs are
persisted.
|
| |
(#8203)
This is so that we can use it for the backfill events stream.
|
| |
... and to show that it does something slightly different to
`_get_e2e_device_keys_txn`.
`include_all_devices` and `include_deleted_devices` were never used (and
`include_deleted_devices` was broken, since that would cause `None`s in the
result, which were not handled in the loop below).
Add some typing too.
|
| |
This was forgotten in #8164.
|
| |
insert_many, delete_*) (#8168)
|
| |
request_token_inhibit_3pid_errors is turned on (#7991)
* Don't raise session_id errors on submit_token if request_token_inhibit_3pid_errors is set
* Changelog
* Also wait some time before responding to /requestToken
* Incorporate review
* Update synapse/storage/databases/main/registration.py
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
* Incorporate review
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
|
| |
The function is used for two purposes: 1) for subscribers of streams to
get a token they can use to get further updates with, and 2) for
replication to track position of the writers of the stream.
For streams with a single writer the two scenarios produce the same
result, however the situation becomes complicated for streams with
multiple writers. The current `MultiWriterIdGenerator` does not
correctly handle the first case (which is not an issue as it's only used
for the `caches` stream which nothing subscribes to outside of
replication).
|
| |
I think this would have caught all the cases in
https://github.com/matrix-org/synapse/issues/7642 - and I think a 500 makes
more sense here than a 403
|
| |
database to async (#8042)
|
| |
The Delete Room admin API allows server admins to remove rooms from the server
and block them.
`DELETE /_synapse/admin/v1/rooms/<room_id>`
It is a combination and improvement of the "[Shutdown room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/shutdown_room.md)" and "[Purge room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/purge_room.md)" APIs.
Fixes: #6425
It also fixes a bug in [synapse/storage/data_stores/main/room.py](synapse/storage/data_stores/main/room.py) in `get_room_with_stats`:
it should return `None` if the room is unknown, but instead it raised an `IndexError`.
https://github.com/matrix-org/synapse/blob/901b1fa561e3cc661d78aa96d59802cf2078cb0d/synapse/storage/data_stores/main/room.py#L99-L105
Related to:
- #5575
- https://github.com/Awesome-Technologies/synapse-admin/issues/17
Signed-off-by: Dirk Klimpel dirk@klimpel.org
|
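A hedged example of calling the endpoint (the body parameters shown, such as `new_room_user_id`, `block` and `purge`, follow the admin API docs of the era and may differ between versions):

```python
import requests

resp = requests.delete(
    "https://homeserver.example.org/_synapse/admin/v1/rooms/!abcdef:example.org",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={
        "new_room_user_id": "@admin:example.org",  # move joined users here
        "block": True,     # prevent future joins
        "purge": True,     # delete the room's history from the database
        "message": "This room has been shut down.",
    },
)
print(resp.status_code, resp.json())
```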
| |
... instead of duplicating `config.signing_key[0]` everywhere
|
| |
Calls `self.get_success` on all deferred methods instead of abusing `self.pump()`. This has the benefit of working with coroutines, as well as checking that method execution completed successfully.
There are also a few small cleanups that I made in the process.
|
| |
These are surprisingly expensive, and we only really need to do them at startup.
|
| |
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).
Most of Synapse doesn't hit such races, so we don't do the waiting automagically, instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.
People probably want to look at this commit by commit.
|
| |
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
|
| |
Fix a bug where the `get_joined_users` cache could be corrupted by custom
status events (or other state events with a state_key matching the user ID).
The bug was introduced by #2229, but has largely gone unnoticed since then.
Fixes #7099, #7373.
|
| |
The aim here is to get to a stage where we have a `PersistEventStore` that holds all the write methods used during event persistence, so that we can take that class out of the `DataStore` mixin and instantiate it separately. This will allow us to instantiate it on processes other than master, while also ensuring it is only available on processes that are configured to write to the events stream.
This is a bit of an architectural change, where we end up with multiple classes per data store (rather than one per data store we have now). We end up having:
1. Storage classes that provide high level APIs that can talk to multiple data stores.
2. Data store modules that consist of classes that must point at the same database instance.
3. Classes in a data store that can be instantiated on processes depending on config.
|
| |
variables (#6391)
|
| |
This will be used to coordinate stream IDs across multiple writers.
Functions as the equivalent of both `StreamIdGenerator` and
`SlavedIdTracker`.
|
| |
(#6881)
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
|
| |
We seem to have some duplicates, which could do with being cleared out.
|
| |
returning a None or an int that we don't use is confusing.
|
| |
(Almost) everywhere that uses it is happy with an awaitable.
|
| |
This was only used in a unit test, so let's just inline it in the test.
|
| |
* Add 'device_lists_outbound_pokes' as extra table.
This makes sure we check all the relevant tables to get the current max
stream ID.
Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
* Change device lists stream to have one row per id.
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
* Change device list replication to match new semantics.
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
* Newsfile
* Remove handling of multiple rows per ID
* Fix worker handling
* Comments from review
|
| |
It was originally implemented by pulling the full auth chain of all
state sets out of the database and doing set comparison. However, that
can take a lot of work if the state and auth chains are large.
Instead, let's try to fetch the auth chains at the same time and
calculate the difference on the fly, allowing us to bail early if all
the auth chains converge. Assuming that the auth chains do converge more
often than not, this should improve performance. Hopefully.
|
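A simplified sketch of the lockstep walk (the real implementation also bails out early once the remaining chains have converged, which this omits): each state set's chain is walked backwards, recording which walkers reached each event; events not reached by every walker form the difference.

```python
from typing import Callable, Dict, Iterable, List, Set

def auth_chain_difference(
    state_sets: List[Set[str]],
    get_auth_ids: Callable[[str], Iterable[str]],
) -> Set[str]:
    seen_by: Dict[str, Set[int]] = {}  # event_id -> walkers that reached it
    frontiers = [set(s) for s in state_sets]
    while any(frontiers):
        for idx, frontier in enumerate(frontiers):
            next_frontier: Set[str] = set()
            for event_id in frontier:
                reached = seen_by.setdefault(event_id, set())
                if idx not in reached:
                    reached.add(idx)
                    next_frontier.update(get_auth_ids(event_id))
            frontiers[idx] = next_frontier
    everyone = set(range(len(state_sets)))
    return {e for e, reached in seen_by.items() if reached != everyone}
```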
| |
* Break down monthly active users by appservice_id and emit via prometheus.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
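A sketch with `prometheus_client` (metric and label names are illustrative):

```python
from prometheus_client import Gauge

mau_by_service = Gauge(
    "synapse_admin_mau_current_mau_by_service",
    "Current MAU broken down by appservice_id",
    ["app_service"],
)

def report_mau(counts: dict) -> None:
    # counts maps an appservice_id (or "native" for ordinary users) to its MAU
    for app_service, count in counts.items():
        mau_by_service.labels(app_service=app_service).set(count)
```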
| |
Ensure good comprehension hygiene using flake8-comprehensions.
|
| |
this amounts to the same thing, but replaces `_event_dict` with `_dict`, and
removes some of the function layers generated by `property`.
|
| |
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| |
* Bump signedjson to 1.1
... so that we can use the type definitions
* Fix breakage caused by upgrade to signedjson 1.1
Thanks, @illicitonion...
|
| |
This is so that we don't have to rely on pulling it out from `current_state_events` table.
|
| |
There are quite a few places that we assume that a redaction event has a
corresponding `redacts` key, which is not always the case. So let's
cheekily make it so that event.redacts just returns None instead.
|
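The trick amounts to exposing `redacts` as a property with a `None` default, roughly like this sketch (not the exact Synapse class):

```python
class EventBase:
    def __init__(self, event_dict: dict):
        self._dict = event_dict

    @property
    def redacts(self):
        """The event ID this event redacts, or None for malformed redactions."""
        return self._dict.get("redacts")
```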
| |
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
|
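Illustrative DDL for such a table, keyed on the local user and room and kept up to date as membership events are persisted:

```python
CREATE_LOCAL_MEMBERSHIP = """
CREATE TABLE local_current_membership (
    room_id    TEXT NOT NULL,
    user_id    TEXT NOT NULL,  -- local users only
    event_id   TEXT NOT NULL,  -- the user's last membership event in the room
    membership TEXT NOT NULL   -- join / leave / ban / ...
);
CREATE UNIQUE INDEX local_current_membership_idx
    ON local_current_membership (user_id, room_id);
"""
```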
| |
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
|
| |
... to make way for a new method which just returns the event ids
|
| |
This encapsulates config for a given database and is the way to get new
connections.
|
| |
Stop the `update_client_ips` background job from recreating deleted devices.
|
| |
erikj/split_purge_history
|
| |
erikj/split_purge_history
|
| |
erikj/split_out_persistence_store
|
| |
This makes it easier to use in an async/await world.
Also fixes a bug where cache descriptors would occasionally return a raw
value rather than a deferred.
|
| |
This is in preparation for having multiple data stores that offer
different functionality, e.g. splitting out state or event storage.
|
| |
make storage layer in charge of interpreting the device key data
|
| |
Fix errors storing large retry intervals.
|
| |
We have set the max retry interval to a value larger than a postgres or
sqlite int can hold, which caused exceptions when updating the
destinations table.
To fix postgres we need to change the column to a bigint, and for sqlite
we lower the max interval to 2**62 (which is still incredibly long).
|
| |
Fetching a censored redaction caused an exception because the code
expects redactions to have a `redacts` key, which redacted redactions
don't have.
|
| |
Fixes #5905
|
| |
This is a) simpler than querying user_ips directly and b) means we can
purge older entries from user_ips without losing the required info.
The storage functions now no longer return the access_token, since it
was unused.
|
| |
Track the time that a server started failing at, for general analysis purposes.
|
| |
Censor redactions in DB after a month
|
| |
Fix handling of redactions of redactions
|
| |
Add unit test for current state membership bg update
|
| |
Record how long an access token is valid for, and raise a soft-logout once it
expires.
|
| |
The 'token' param is no longer used anywhere except the tests, so let's kill
that off too.
|
| |
Adds new config option `cleanup_extremities_with_dummy_events` which
periodically sends dummy events to rooms with more than 10 extremities.
THIS IS REALLY EXPERIMENTAL.
|
| |
fixes #5153
|
| |
Set default room version to v4.
|
| |
This is a first step to checking that the key is valid at the required moment.
The idea here is that, rather than passing VerifyKey objects in and out of the
storage layer, we instead pass FetchKeyResult objects, which simply wrap the
VerifyKey and add a valid_until_ts field.
|
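A sketch of the wrapper type, in the attrs style the codebase favoured at the time:

```python
import attr

@attr.s(slots=True, frozen=True)
class FetchKeyResult:
    verify_key = attr.ib()      # a signedjson VerifyKey
    valid_until_ts = attr.ib()  # ms timestamp the key can be trusted until
```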
| |
Storing server keys hammered the database a bit. This replaces the
implementation which stored a single key, with one which can do many updates at
once.
|
| |
This is in preparation for reaction work which requires it.
|
| |
It doesn't really belong under rest/client/v1 any more.
|
| |
Rewrite this so that it doesn't hammer the database.
|
| |
Remove presence list support as per MSC 1819
|
| |
Collect all the things that make room-versions different to one another into
one place, so that it's easier to define new room versions.
|
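A sketch of the shape this takes (field names and values here are illustrative, not the exact Synapse definitions): every behavioural difference lives on one frozen object, and a registry maps identifiers to objects.

```python
import attr

@attr.s(slots=True, frozen=True)
class RoomVersion:
    identifier = attr.ib()            # e.g. "4"
    event_format = attr.ib()          # event ID / signing format to use
    state_res_algorithm = attr.ib()   # which state resolution algorithm applies
    enforce_key_validity = attr.ib()  # whether to check key validity periods

ROOM_VERSION_V4 = RoomVersion(
    identifier="4",
    event_format=2,
    state_res_algorithm=2,
    enforce_key_validity=False,
)

KNOWN_ROOM_VERSIONS = {ROOM_VERSION_V4.identifier: ROOM_VERSION_V4}
```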
| |
erikj/require_format_version
|
| |
NULL in upserts. (#4369)
|
| |
Allow for the creation of a support user.
A support user can access the server, join rooms, and interact with other users, but does not appear in the user directory nor does it contribute to monthly active user limits.
|
| |
When we register a new user from SAML2 data, initialise their displayname
correctly.
|
| |
Currently when fetching state groups from the data store we make two
hits to the database: once for members and once for non-members (unless
the request is filtered to one or the other). This adds needless load to
the database, so this PR refactors the lookup to make only a single
database hit.
|
| |
I spent ages trying to figure out how I was going mad...
|
| |
Refactor state module to support multiple room versions
|
| |
erikj/refactor_state_handler
|
| |
Splits the state_group_cache in two.
One half contains normal state events; the other contains member events.
The idea is that the lazyloading common case of: "I want a subset of member events plus all of the other state" can be accomplished efficiently by splitting the cache into two, and asking for "all events" from the non-members cache, and "just these keys" from the members cache. This means we can avoid having to make DictionaryCache aware of these sort of complicated queries, whilst letting LL requests benefit from the caching.
Previously we were unable to sensibly use the caching and had to pull all state from the DB irrespective of the filtering, which made things slow. Hopefully fixes https://github.com/matrix-org/synapse/issues/3720.
|
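A sketch of the split lookup (the cache API here is an illustrative stand-in for `DictionaryCache`, with hypothetical `get_all`/`get_some` methods): ask the non-members cache for everything and the members cache for just the requested keys, then merge.

```python
def get_state_for_group(group, wanted_member_keys, members_cache, non_members_cache):
    """wanted_member_keys: the ("m.room.member", state_key) pairs the request needs."""
    # Lazy-loading common case: all non-member state...
    state = dict(non_members_cache.get_all(group))
    # ...plus only the member events actually asked for.
    state.update(members_cache.get_some(group, wanted_member_keys))
    return state
```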
| |
Block ability to read via sync if mau limit exceeded
|
| |
This reverts commit ec716a35b219d147dee51733b55573952799a549.
|