| Commit message | Author | Age | Files | Lines |
| |
|
|
|
| |
For in-memory streams, when fetching updates on workers we need to query the source of the stream, which is currently hard-coded to be master. This PR threads the source instance received via `POSITION` through to the update function in each stream, which can then be passed to the replication client for in-memory streams.
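As a rough illustration (the names here are hypothetical, not Synapse's actual API), the idea is that the update function receives the instance name that arrived with `POSITION` instead of assuming master:
```
# Hypothetical sketch: the stream no longer assumes its source is master;
# the instance name carried by the POSITION command is threaded through.
class Stream:
    def __init__(self, update_function):
        self._update_function = update_function

    async def get_updates_since(self, instance_name, from_token, upto_token):
        # `instance_name` came from the POSITION command, and is forwarded
        # so the replication client can query the right instance.
        return await self._update_function(instance_name, from_token, upto_token)
```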
|
|
|
|
| |
We move the processing of typing and federation replication traffic into their handlers so that `Stream.current_token()` points to a valid token. This allows us to remove `get_streams_to_replicate()` and `stream_positions()`.
|
| |
|
|
|
|
|
| |
By persisting the user interactive authentication sessions to the database, this fixes
situations where a user hits different workers throughout their auth session, and also
allows sessions to persist through restarts of Synapse.
|
|
|
|
|
| |
This is primarily for allowing us to send those commands from workers, but for now simply allows us to ignore echoed RDATA/POSITION commands that we sent (we get echoes of sent commands when using redis). Currently we log a WARNING on the master process every time we receive an echoed RDATA.
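A minimal sketch of the echo filtering, with illustrative names (the real command classes differ): each command records which instance sent it, so a receiver can quietly drop its own echoes.
```
# Illustrative sketch: drop commands that we ourselves sent, which is
# what happens when redis pub/sub echoes our messages back to us.
class CommandHandler:
    def __init__(self, instance_name):
        self._instance_name = instance_name

    def on_rdata(self, cmd):
        if cmd.instance_name == self._instance_name:
            # An echo of a command this instance sent; ignore it quietly
            # instead of logging a WARNING.
            return
        self.process_rows(cmd.stream_name, cmd.token, cmd.rows)

    def process_rows(self, stream_name, token, rows):
        ...  # normal RDATA handling would go here
```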
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For direct TCP connections we need the master to relay REMOTE_SERVER_UP
commands to the other connections so that all instances get notified
about it. The old implementation just relayed to all connections,
assuming that sending back to the original sender of the command was
safe. This is not true for redis, where sent commands are echoed back to
the sender, which caused the master to loop indefinitely, sending and then
re-receiving the REMOTE_SERVER_UP commands that it had itself sent.
The fix is to ensure that we only relay to *other* connections and not
to the connection we received the notification from.
Fixes #7334.
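The fix amounts to something like the following sketch (function and attribute names are illustrative):
```
# Relay REMOTE_SERVER_UP to every connection except the one it arrived on,
# so redis echoes can no longer bounce the command back at the sender.
def relay_remote_server_up(connections, source_conn, cmd):
    for conn in connections:
        if conn is not source_conn:
            conn.send_command(cmd)
```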
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Factor out functions for injecting events into database
I want to add some more flexibility to the tools for injecting events into the
database, and I don't want to clutter up HomeserverTestCase with them, so let's
factor them out to a new file.
* Rework TestReplicationDataHandler
This wasn't very easy to work with: the mock wrapping was largely superfluous,
and it's useful to be able to inspect the received rows and to clear out the
received list.
* Fix AssertionErrors being thrown by EventsStream
Part of the problem was that there was an off-by-one error in the assertion,
but also the limit logic was too simple. Fix it all up and add some tests.
|
|
|
|
|
| |
(#6881)
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
|
|
|
|
|
|
|
| |
Specifically some tests for the typing stream, which means we test streams that fetch missing updates via HTTP (rather than via the DB).
We also shuffle things around a bit so that we create two separate `HomeServer` objects, rather than trying to insert a slaved store into place.
Note: `test_typing.py` is heavily inspired by `test_receipts.py`
|
|
|
|
| |
When running the UTs against a postgres database, we need to set the collation
correctly.
|
|
|
|
|
|
| |
matrix-org/babolivier/request_token""
This reverts commit 1adf6a55870aa08de272591ff49db9dc49738076.
|
|
|
| |
I messed this up last time I tried (#7239 / e13c6c7).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
First some background: StreamChangeCache is used to keep track of what "entities" have
changed since a given stream ID. So for example, we might use it to keep track of when the last
to-device message for a given user was received [1], and hence whether we need to pull any to-device messages from the database on a sync [2].
Now, it turns out that StreamChangeCache didn't support more than one thing being changed at
a given stream_id (this was part of the problem with #7206). However, it's entirely valid to send
to-device messages to more than one user at a time.
As it turns out, this did in fact work, because *some* methods of StreamChangeCache coped
ok with having multiple things changing on the same stream ID, and it seems we never actually
use the methods which don't work on the stream change caches where we allow multiple
changes at the same stream ID. But that feels horribly fragile, hence: let's update
StreamChangeCache to properly support this, and add some typing and some more tests while
we're at it.
[1]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L301
[2]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L47-L51
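A minimal sketch of a cache that supports several entities changing at one stream ID; this is an illustration under assumed semantics, not Synapse's actual `StreamChangeCache`:
```
from collections import defaultdict
from typing import Dict, Set

class StreamChangeCacheSketch:
    """Tracks which entities changed at which stream position."""

    def __init__(self, earliest_known_stream_pos: int) -> None:
        self._earliest = earliest_known_stream_pos
        self._entity_to_pos: Dict[str, int] = {}
        # A set per position, so multiple entities can share a stream ID.
        self._pos_to_entities: Dict[int, Set[str]] = defaultdict(set)

    def entity_has_changed(self, entity: str, stream_pos: int) -> None:
        old_pos = self._entity_to_pos.get(entity)
        if old_pos is not None:
            self._pos_to_entities[old_pos].discard(entity)
        self._entity_to_pos[entity] = stream_pos
        self._pos_to_entities[stream_pos].add(entity)

    def has_entity_changed(self, entity: str, stream_pos: int) -> bool:
        if stream_pos < self._earliest:
            return True  # too old to know either way; assume changed
        pos = self._entity_to_pos.get(entity)
        return pos is not None and pos > stream_pos

    def get_all_entities_changed(self, stream_pos: int) -> Set[str]:
        return {
            entity
            for pos, entities in self._pos_to_entities.items()
            if pos > stream_pos
            for entity in entities
        }
```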
|
| |
|
|
|
| |
This is configured via the `redis` config options.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
We seem to have some duplicates, which could do with being cleared out.
|
|
|
| |
The aim here is to move the command handling out of the TCP protocol classes and to also merge the client and server command handling (so that we can reuse them for the redis protocol). This PR simply moves the client paths to the new `ReplicationCommandHandler`; a future PR will move the server paths too.
|
|
|
|
|
|
|
|
|
| |
Fixes #6815
Before figuring out whether we should alert a user on MAU, we call get_notice_room_for_user to get some info on the existing server notices room for this user. This function, if the room doesn't exist, creates it and invites the user into it. This means that, if we decide later that no server notice is needed, the user gets invited into a room with no message in it. This happens at every restart of the server, since the room ID returned by get_notice_room_for_user is cached.
This PR fixes that by moving the inviting bit to a dedicated function, which is only called when the server actually needs to send a notice to the user. A potential issue with this approach is that the room created by get_notice_room_for_user doesn't match how that same function looks for an existing room (i.e. it creates a room that doesn't have an invite or a join for the current user in it, so it could lead to a new room being created each time a user syncs), but I'm not sure this is a problem given it's cached until the server restarts, so that function won't run very often.
It also renames get_notice_room_for_user to get_or_create_notice_room_for_user to make what it does clearer.
|
|\
| |
| | |
Only run one background update at a time
|
| |
| |
| |
| | |
Returning None, or an int that we don't use, is confusing.
|
| |
| |
| |
| |
| | |
This mostly just reduces the amount of "running from sentinel context" spam
during unittest setup.
|
| |
| |
| |
| | |
(Almost) everywhere that uses it is happy with an awaitable.
|
| |
| |
| |
| | |
This was only used in a unit test, so let's just inline it in the test.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Occasionally we could get a federation device list update transaction which
looked like:
```
[
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D2', 'prev_id': [], 'stream_id': 12, 'deleted': True}},
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D1', 'prev_id': [12], 'stream_id': 11, 'deleted': True}},
{'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D3', 'prev_id': [11], 'stream_id': 13, 'deleted': True}}
]
```
Having `stream_ids` which are lower than `prev_ids` looks odd. It might work
(I'm not actually sure), but in any case it doesn't seem like a reasonable
thing to expect other implementations to support.
|
|/ |
|
|
|
|
| |
make sure we clear out all but one update for the user
|
|\
| |
| | |
Add tests for outbound device pokes
|
| | |
|
| |
| |
| |
| |
| | |
this is never set to anything other than "test", and is a source of unnecessary
boilerplate.
|
| |
| |
| |
| |
| |
| |
| | |
That fallback sets the redirect URL to itself (so it can process the login
token then return gracefully to the client). This would make it pointless to
ask the user for confirmation, since the URL the confirmation page would be
showing wouldn't be the client's.
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| | |
This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
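In rough terms (hypothetical method names), a client now reacts to `POSITION` by catching up itself:
```
# Illustrative sketch: on POSITION the client pulls the missing updates
# itself (from the database, or from master out of band), rather than
# having the server replay historical RDATA.
async def on_position(handler, stream_name, new_token):
    current = handler.get_current_token(stream_name)
    while current < new_token:
        updates, current = await handler.fetch_updates(
            stream_name, from_token=current, upto_token=new_token
        )
        await handler.process_updates(stream_name, updates)
```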
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Pull Sentinel out of LoggingContext
... and drop a few unnecessary references to it
* Factor out LoggingContext.current_context
move `current_context` and `set_context` out to top-level functions.
Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.
* move copy-to-parent into `stop`
this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.
* Replace `LoggingContext.alive` with `finished`
Turn `alive` into `finished` and make it a bit better defined.
|
|
|
|
|
| |
This just helps keep the rows closer to their streams, so that it's easier to
see what the format of each stream is.
|
|
|
|
|
| |
Attempts to clarify the sample config for databases, and add some stuff about
tcp keepalives to `postgres.md`.
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Add 'device_lists_outbound_pokes' as extra table.
This makes sure we check all the relevant tables to get the current max
stream ID.
Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
* Change device lists stream to have one row per id.
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
* Change device list replication to match new semantics.
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
* Newsfile
* Remove handling of multiple rows per ID
* Fix worker handling
* Comments from review
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
It was originally implemented by pulling the full auth chain of all
state sets out of the database and doing set comparison. However, that
can take a lot of work if the state and auth chains are large.
Instead, let's try to fetch the auth chains at the same time and
calculate the difference on the fly, allowing us to bail early if all
the auth chains converge. Assuming that the auth chains do converge more
often than not, this should improve performance. Hopefully.
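A simplified sketch of computing the difference in one shared walk (the `get_auth_ids` helper is hypothetical, and the early-bail optimisation is omitted for clarity):
```
def auth_chain_difference(state_sets, get_auth_ids):
    """Events reachable from some, but not all, of the state sets."""
    num_sets = len(state_sets)
    reached = {}  # event_id -> indices of the state sets that reach it
    frontier = set()
    for idx, state_set in enumerate(state_sets):
        for event_id in state_set:
            reached.setdefault(event_id, set()).add(idx)
            frontier.add(event_id)

    # Walk all the auth chains together, propagating which state sets
    # can reach each event, instead of materialising each full chain.
    while frontier:
        next_frontier = set()
        for event_id in frontier:
            sources = reached[event_id]
            for auth_id in get_auth_ids(event_id):
                existing = reached.setdefault(auth_id, set())
                if not sources <= existing:
                    existing |= sources
                    next_frontier.add(auth_id)
        frontier = next_frontier

    return {e for e, srcs in reached.items() if len(srcs) != num_sets}
```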
|
| |
| |
| |
| |
| |
| |
| | |
Extends #5794 etc to the SimpleHttpClient so that it also applies to non-federation requests.
Fixes #7092.
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
(#7053)"
This reverts commit 54dd28621b070ca67de9f773fe9a89e1f4dc19da, reversing
changes made to 6640460d054e8f4444046a34bdf638921b31c01e.
|
|\ \ |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| |/ |
|
| |
| |
| |
| | |
room ver. (#7037)
|
| |
| |
| |
| |
| | |
* Break down monthly active users by appservice_id and emit via prometheus.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
| | |
|
| |
| |
| |
| |
| | |
This is a precursor to giving EventBase objects the knowledge of which room version they belong to.
|
|\ \ |
|
| | | |
|
| |\ \ |
|
| | | |
| | | |
| | | |
| | | | |
Fix #6910
|
| |/ / |
|
| |/
|/| |
|
| |
| |
| | |
Fix #6910
|
| |
| |
| | |
to stop the FederationHandler trying to do master stuff
|
| |
| |
| |
| |
| | |
When we get an invite over federation, store the room version in the rooms table.
The general idea here is that, when we pull the invite out again, we'll want to know what room_version it belongs to (so that we can later redact it if need be). So we need to store it somewhere...
|
| | |
|
| |
| |
| |
| | |
handling of call to deactivate user (#6990)
|
| | |
|
| |
| |
| |
| | |
Ensure good comprehension hygiene using flake8-comprehensions.
|
|/
|
|
|
|
| |
The state res v2 algorithm only cares about the difference between auth
chains, so we can pass in the known common state to the `get_auth_chain`
storage function so that it can ignore those events.
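Sketched with a hypothetical signature, the storage function can simply refuse to walk into the known-common events:
```
def get_auth_chain(event_ids, get_auth_ids, ignore=frozenset()):
    """Breadth-first walk of auth events, skipping anything in `ignore`
    (e.g. state known to be common to all state sets)."""
    chain = set()
    frontier = set(event_ids) - ignore
    while frontier:
        chain |= frontier
        frontier = {
            auth_id
            for event_id in frontier
            for auth_id in get_auth_ids(event_id)
        } - chain - ignore
    return chain
```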
|
|\
| |
| | |
Make room alias lists peekable
|
| |
| |
| |
| |
| |
| | |
As per
https://github.com/matrix-org/matrix-doc/pull/2432#pullrequestreview-360566830,
make room alias lists accessible to users outside world_readable rooms.
|
| |
| |
| |
| |
| | |
these were getting a bit unwieldy, so let's combine `check_joined_room` and
`check_user_was_in_room` into a single `check_user_in_room`.
|
|/
|
| |
it's not in the spec yet, so needs to be unstable. Also add a feature flag for it. Also add a test for admin users.
|
|
|
|
|
| |
per matrix-org/matrix-doc#2432
|
|\
| |
| | |
Rewrite _EventInternalMetadata to back it with a dict
|
| |
| |
| |
| |
| | |
this amounts to the same thing, but replaces `_event_dict` with `_dict`, and
removes some of the function layers generated by `property`.
|
| |
| |
| | |
Stop sending events when creating or deleting associations (room aliases). Send an updated canonical alias event if one of the alt_aliases is deleted.
|
| | |
|
|/
|
| |
Convert directory handler tests to use HomeserverTestCase.
|
|
|
| |
Add a method to the spam checker to filter the user directory results.
|
| |
|
|
|
|
|
|
|
|
| |
* Reject device display names that are too long.
Too long is currently defined as 100 characters in length.
* Add a regression test for rejecting a too long device display name.
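A minimal sketch of the validation (the exception type and helper name are illustrative):
```
MAX_DEVICE_DISPLAY_NAME_LEN = 100  # "too long" as defined above

def check_device_display_name(display_name):
    if display_name is not None and len(display_name) > MAX_DEVICE_DISPLAY_NAME_LEN:
        raise ValueError("Device display name is too long")
```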
|
|
|
|
|
|
|
| |
... and use it in places where it's trivial to do so.
This will make it easier to pass room versions into the FrozenEvent
constructors.
|
| |
|
| |
|
|
|
|
| |
It's called from all over the shop, so this one's a bit messy.
|
|
|
|
| |
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
|\
| |
| | |
Pass room_version into add_hashes_and_signatures
|
| | |
|
| | |
|
|/
|
|
|
| |
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
|
|
|
|
|
| |
as per MSC2260
|
| |
|
|
|
|
|
|
|
|
|
|
| |
* Bump signedjson to 1.1
... so that we can use the type definitions
* Fix breakage caused by upgrade to signedjson 1.1
Thanks, @illicitonion...
|
|
|
|
| |
I'm going to need another copy (hah!) of this.
|
|
|
|
|
|
|
| |
These are easier to work with than the strings and we normally have one around.
This fixes `FederationHandler._persist_auth_tree` which was passing a
RoomVersion object into event_auth.check instead of a string.
|
|
|
| |
This is so that we don't have to rely on pulling it out from `current_state_events` table.
|
| |
|
|
|
|
|
|
| |
There are quite a few places that we assume that a redaction event has a
corresponding `redacts` key, which is not always the case. So let's
cheekily make it so that event.redacts just returns None instead.
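Roughly speaking (a sketch, not the actual `EventBase` code), the accessor becomes:
```
class EventSketch:
    def __init__(self, event_dict):
        self._event_dict = event_dict

    @property
    def redacts(self):
        # Absent `redacts` keys yield None rather than raising, so
        # callers no longer need to guard every access.
        return self._event_dict.get("redacts")
```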
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
... since the whole response is huge.
We even need to break up the assertions, since kibana otherwise truncates them.
|
|
|
|
|
|
|
|
|
|
| |
* Port synapse.replication.tcp to async/await
* Newsfile
* Correctly document type of on_<FOO> functions as async
* Don't be overenthusiastic with the asyncing....
|
| |
|
|
|
|
|
| |
Allow REST endpoint implementations to raise a RedirectException, which will
redirect the user's browser to a given location.
|
|
|
|
|
|
|
| |
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
|
| |
|
|
|
| |
This is pretty pointless. Let's just use SynapseError.
|
|
|
| |
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
|
|
|
|
| |
Fixes #6552
|
|
|
|
|
|
|
|
|
|
|
| |
This was ill-advised. We can't modify verify_keys here, because the response
object has already been signed by the requested key.
Furthermore, it's somewhat unnecessary because existing versions of Synapse
(which get upset that the notary key isn't present in verify_keys) will fall
back to a direct fetch via `/key/v2/server`.
Also: more tests for fetching keys via perspectives; it would be nice if we actually tested what happens when our fetcher can't talk to our notary implementation.
|
| |
|
|\
| |
| | |
Remove a bunch of unused code from event creation
|
| | |
|
| | |
|
| |
| |
| |
| | |
... to make way for a new method which just returns the event ids
|
| |
| |
| |
| |
| |
| | |
Lift the restriction that *all* the keys used for signing v2 key responses be
present in verify_keys.
Fixes #6596.
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Kill off redundant SynapseRequestFactory
We already get the Site via the Channel, so there's no need for a dedicated
RequestFactory: we can just use the right constructor.
* Workaround for error when fetching notary's own key
As a notary server, when we return our own keys, include all of our signing
keys in verify_keys.
This is a workaround for #6596.
|
|
|
|
| |
We already get the Site via the Channel, so there's no need for a dedicated
RequestFactory: we can just use the right constructor.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove redundant python2 support code
`str.decode()` doesn't exist on python3, so presumably this code was doing
nothing
* Filter out pushers with corrupt data
When we get a row with unparsable json, drop the row, rather than returning a
row with null `data`, which will then cause an explosion later on.
* Improve logging when we can't start a pusher
Log the ID to help us understand the problem
* Make email pusher setup more robust
We know we'll have a `data` member, since that comes from the database. What we
*don't* know is if that is a dict, and if that has a `brand` member, and if
that member is a string.
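A sketch of the filtering step, assuming rows of `(pusher_id, data_json)` pairs (illustrative shapes, not the actual schema):
```
import json
import logging

logger = logging.getLogger(__name__)

def decode_pusher_rows(rows):
    pushers = []
    for pusher_id, data_json in rows:
        try:
            data = json.loads(data_json)
        except ValueError:
            # Drop the corrupt row rather than returning a pusher with
            # null data that explodes later; log the ID to help debugging.
            logger.warning("Invalid data for pusher %s; skipping", pusher_id)
            continue
        pushers.append({"id": pusher_id, "data": data})
    return pushers
```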
|
|
|
|
|
| |
This encapsulates config for a given database and is the way to get new
connections.
|
|\ |
|
| |\
| | |
| | | |
Use the filtered version of an event when responding to /context requests for that event
|
| | | |
|
| | | |
|
| | | |
|
| |/
| |
| |
| |
| | |
When we perform state resolution, check that all of the events involved are in
the right room.
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
When we perform state resolution, check that all of the events involved are in
the right room.
|
| | |
|
|\ \
| | |
| | | |
Move database config from apps into HomeServer object
|
| |/ |
|
|\ \
| | |
| | | |
Port some of FederationHandler to async/await
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
and associated functions:
* on_receive_pdu
* handle_queued_pdus
* get_missing_events_for_pdu
|
|\ \ \
| |/ /
|/| | |
Port handlers.account_validity to async/await.
|
| |/ |
|
| |
| |
| |
| | |
Stop the `update_client_ips` background job from recreating deleted devices.
|
|\ \
| | |
| | | |
Fix `make_deferred_yieldable` to work with coroutines
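The gist of the fix, sketched (Twisted's `defer.ensureDeferred` is real; the surrounding function body is elided):
```
import inspect
from twisted.internet import defer

def make_deferred_yieldable_sketch(d):
    if inspect.isawaitable(d):
        # Coroutines (and other awaitables) are wrapped into a Deferred
        # first, so the usual logcontext handling below applies to them.
        d = defer.ensureDeferred(d)
    # ... existing Deferred/logcontext handling continues here ...
    return d
```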
|
| |/ |
|
|\ \
| |/
|/| |
Remove SnapshotCache in favour of ResponseCache
|
| | |
|
| |
| |
| | |
Back out cross-signing code added in Synapse 1.5.0, which caused a performance regression.
|
|\ \
| |/
|/| |
Pass in Database object to data stores.
|
| | |
|
| | |
|
|\ \
| |/
|/| |
Port SyncHandler to async/await
|
| | |
|
| | |
|
|\|
| |
| |
| | |
erikj/make_database_class
|
| | |
|
| | |
|
|/ |
|
| |
|
|\
| |
| | |
Filter state, events_before and events_after in /context requests
|
| | |
|
| |\
| | |
| | |
| | | |
into babolivier/context_filters
|
| | |\ |
|
| |\ \ \
| | |/ /
| |/| | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | | |
Ensure that the default setting for the room directory is that it is hidden from public view.
|
| |/ /
|/| |
| | |
| | |
| | |
| | |
| | |
| | | |
Implement part of [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). The parts that differ are:
* the feature is hidden behind a configuration flag (`enable_ephemeral_messages`)
* self-destruction doesn't happen for state events
* only implement support for the `m.self_destruct_after` field (not the `m.self_destruct` one)
* doesn't send synthetic redactions to clients because for this specific case we consider the clients to be able to destroy an event themselves, instead we just censor it (by pruning its JSON) in the database
|
| | | |
|
| | | |
|
| | | |
|
|\ \ \
| | | |
| | | | |
Implement message retention policies (MSC1763)
|
| |\ \ \
| | | |/
| | |/| |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | | |
As per MSC1763 ('Retention is only considered for non-state events'), don't filter out state events based on the room's retention policy.
|
| | | | |
|
| |/ /
|/| |
| | |
| | | |
public_baseurl (#6379)
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
|\ \ \
| | | |
| | | | |
Split purge API into events vs state and add PurgeEventsStorage
|
| | | |
| | | |
| | | |
| | | | |
And fix the tests to actually test that things got deleted.
|
| |\ \ \
| | | |/
| | |/|
| | | | |
erikj/split_purge_history
|
| |\ \ \
| | | |/
| | |/|
| | | | |
erikj/split_purge_history
|
| |\ \ \
| | | | |
| | | | |
| | | | | |
erikj/split_purge_history
|
| | | | | |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
(#6320)
Fixes a bug where rejected events were persisted with the wrong state group.
Also fixes an occasional internal-server-error when receiving events over
federation which are rejected and (possibly because they are
backwards-extremities) have no prev_group.
Fixes #6289.
|
|\ \ \ \ \
| | |_|_|/
| |/| | | |
|
| | |_|/
| |/| |
| | | | |
* remove psutil and replace with resource
|
| |\ \ \
| | | | |
| | | | | |
Implement MSC2326 (label based filtering)
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
The intention here is to make it clearer which fields we can expect to be
populated when: notably, that the _event_type etc aren't used for the
synchronous impl of EventContext.
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.
The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.
The proxy will then be used for
* push
* url previews
* phone-home stats
* recaptcha validation
* CAS auth validation
It will *not* be used for:
* Application Services
* Identity servers
* Outbound federation
* In worker configurations, connections from workers to masters
Fixes #4198.
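For illustration, a sketch of picking the proxy endpoint out of the environment (the parsing details here are assumptions; the real client wiring is more involved):
```
import os

def proxy_from_env(scheme):
    # `http_proxy` is read for plain HTTP, `HTTPS_PROXY` for HTTPS,
    # each holding a `host[:port]` value.
    raw = os.environ.get("http_proxy" if scheme == "http" else "HTTPS_PROXY")
    if not raw:
        return None
    host, _, port = raw.partition(":")
    return (host, int(port)) if port else (host, None)
```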
|
| |\ \ \ \
| | | |_|/
| | |/| | |
|
| | |\ \ \
| | | | | |
| | | | | | |
Add StateGroupStorage interface
|
| | | | |/
| | | |/| |
|
| | |/ / |
|
| | | | |
|
| |\| | |
|
| | |\ \
| | | | |
| | | | |
| | | | | |
erikj/split_out_persistence_store
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
This makes it easier to use in an async/await world.
Also fixes a bug where cache descriptors would occasionally return a raw
value rather than a deferred.
|
| | | |/ |
|
| | | | |
|
| | | |\
| | | | |
| | | | | |
delete keys when deleting backup versions
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| |\ \ \ \
| | | |_|/
| | |/| | |
|
| | | | | |
|
| | | | |
| | | | |
| | | | |
| | | | | |
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-Authored-By: Erik Johnston <erik@matrix.org>
|
| |_|_|/
|/| | |
| | | |
| | | | |
... to stop people causing DoSes with malicious web pages
|
| |/ /
|/| | |
|
| |/
|/|
| |
| | |
The expected use case is to suppress MAU limiting on small instances
|
|\|
| |
| |
| | |
erikj/refactor_stores
|
| |\ |
|
| |\ \ |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
|\ \ \ \
| | |_|/
| |/| |
| | | | |
erikj/refactor_stores
|
| |\ \ \ |
|
| |\ \ \ \
| | |_|/ /
| |/| | /
| | | |/
| | |/| |
|
| |\ \ \ |
|