| Commit message (Collapse) | Author | Age | Files | Lines |
| |
When receiving a /send_join request for a room with join rules set to 'restricted',
check if the user is a member of the spaces defined in the 'allow' key of the join rules.
This only applies to an experimental room version, as defined in MSC3083.
|
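A minimal sketch of that check, assuming a hypothetical `store.is_user_in_room` helper and the `allow` entry shape proposed in MSC3083:

```python
async def may_join_restricted_room(store, user_id, join_rule_content):
    for entry in join_rule_content.get("allow", []):
        if entry.get("type") != "m.room_membership":
            continue
        space_id = entry.get("room_id")
        # Membership of any one allowed space is enough to permit the join.
        if space_id is not None and await store.is_user_in_room(space_id, user_id):
            return True
    return False
```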
| |
| |
handler (#9800)
This refactoring allows adding logic that uses the event context
before persisting it.
|
| |
room. (#9763)"
This reverts commit cc51aaaa7adb0ec2235e027b5184ebda9b660ec4.
The PR was prematurely merged and not yet approved.
|
| |
When receiving a /send_join request for a room with join rules set to 'restricted',
check if the user is a member of the spaces defined in the 'allow' key of the join
rules.
This only applies to an experimental room version, as defined in MSC3083.
|
| |
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 now reads source code as UTF-8 by default.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
| |
Part of #9366
Adds in fixes for B006 and B008, both relating to mutable parameter lint errors.
Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>
|
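For context, B006 flags the classic shared-mutable-default bug; a small illustration (not code from this change):

```python
def add_event_buggy(event, seen=[]):  # B006: one list shared by all calls
    seen.append(event)
    return seen

def add_event_fixed(event, seen=None):
    if seen is None:
        seen = []  # fresh list on every call
    seen.append(event)
    return seen

add_event_buggy("a")
print(add_event_buggy("b"))                        # ['a', 'b'] -- leaked state
print(add_event_fixed("a"), add_event_fixed("b"))  # ['a'] ['b']
```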
| |
| |
This should fix a class of bug where we forget to check whether e.g. an appservice should be exempt from ratelimiting.
We also check the `ratelimit_override` table to see if the user has ratelimiting disabled. That table is really only meant to override the event sender ratelimiting, so we don't use any values from it (as they might not make sense for different rate limits), but we do infer that if ratelimiting is disabled for the user we should disable all ratelimits.
Fixes #9663
|
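The pattern being centralised looks roughly like this sketch (the requester and store helper names are assumptions, not Synapse's exact API):

```python
async def ratelimit_request(store, limiter, requester):
    # Appservices can be configured to be exempt from ratelimiting.
    if requester.app_service and not requester.app_service.is_rate_limited():
        return
    override = await store.get_ratelimit_for_user(requester.user.to_string())
    if override is not None and override.messages_per_second == 0:
        # We only infer "ratelimiting is disabled" from the override row;
        # its specific values are meant for the event-sender limit alone.
        return
    limiter.ratelimit(requester.user.to_string())
```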
| |
Background: When we receive incoming federation traffic, and notice that we are missing prev_events from
the incoming traffic, first we do a `/get_missing_events` request, and then if we still have missing prev_events,
we set up new backwards-extremities. To do that, we need to make a `/state_ids` request to ask the remote
server for the state at those prev_events, and then we may need to then ask the remote server for any events
in that state which we don't already have, as well as the auth events for those missing state events, so that we
can auth them.
This PR attempts to optimise the processing of that state request. The `state_ids` API returns a list of the state
events, as well as a list of all the auth events for *all* of those state events. The optimisation comes from the
observation that we are currently loading all of those auth events into memory at the start of the operation, but
we almost certainly aren't going to need *all* of the auth events. Rather, we can check that we have them, and
leave the actual load into memory for later. (Ideally the federation API would tell us which auth events we're
actually going to need, but it doesn't.)
The effect of this is to reduce the number of events that I need to load for an event in Matrix HQ from about
60000 to about 22000, which means it can stay in my in-memory cache, whereas previously the sheer number
of events meant that all 60K events had to be loaded from db for each request, due to the amount of cache
churn. (NB I've already tripled the size of the cache from its default of 10K).
Unfortunately I've ended up basically C&Ping `_get_state_for_room` and `_get_events_from_store_or_dest` into
a new method, because `_get_state_for_room` is also called during backfill, which expects the auth events to be
returned, so the same tricks don't work. That said, I don't really know why that codepath is completely different
(ultimately we're doing the same thing in setting up a new backwards extremity) so I've left a TODO suggesting
that we clean it up.
|
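The heart of the optimisation might look like this sketch, where `have_seen_events` is assumed to return the subset of IDs already in the database:

```python
async def find_missing_auth_event_ids(store, auth_event_ids):
    seen = await store.have_seen_events(auth_event_ids)
    # Cheap existence check only: nothing is pulled into the in-memory
    # event cache for IDs we already have; those are loaded later if needed.
    return [event_id for event_id in auth_event_ids if event_id not in seen]
```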
|
|
| |
Put the room id in the logcontext, to make it easier to understand what's going on.
|
|
|
|
| |
This uses a simplified version of get_chain_cover_difference to calculate
auth chain of events.
|
| |
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md
`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
|
|
|
|
| |
This PR removes a set that was created and [initially used](https://github.com/matrix-org/synapse/commit/1d2a0040cff8d04cdc7d7d09d8f04a5d628fa9dd#diff-0bc92da3d703202f5b9be2d3f845e375f5b1a6bc6ba61705a8af9be1121f5e42R435-R436), but is no longer used today.
May help cut down a bit on the time it takes to accept invites.
|
| |
|
| |
|
| |
|
|
|
|
| |
Spam checker modules can now provide async methods. This is implemented
in a backwards-compatible manner.
|
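Backwards compatibility here usually means accepting both sync and async callbacks; a minimal sketch of the dispatch:

```python
import inspect

async def call_spam_checker(callback, event):
    result = callback(event)
    if inspect.isawaitable(result):
        # New-style async spam checkers return an awaitable...
        result = await result
    # ...while legacy synchronous ones already returned the value.
    return result
```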
| |
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
|
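Illustrative check of the widened blacklist (example ranges, not Synapse's defaults; the real list comes from the homeserver config):

```python
from netaddr import IPAddress, IPSet

ip_range_blacklist = IPSet(["127.0.0.0/8", "10.0.0.0/8", "169.254.0.0/16"])

def is_ip_blocked(ip: str) -> bool:
    # Consulted before any of the outbound request paths listed above.
    return IPAddress(ip) in ip_range_blacklist
```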
| |
* Consistently use room_id from federation request body
Some federation APIs have a redundant `room_id` path param (see
https://github.com/matrix-org/matrix-doc/issues/2330). We should make sure we
consistently use either the path param or the body param, and the body param is
easier.
* Kill off some references to "context"
Once upon a time, "rooms" were known as "contexts". I think this kills off the
last references to "contexts".
|
| |
There's a handy function called maybe_store_room_on_invite which allows us to create an entry in the rooms table for a room (and its version) which we aren't joined to yet, but which we can reference when ingesting events about it.
This is currently used for invites where we receive some stripped state about the room and pass it down via /sync to the client, without us being in the room yet.
There is a similar requirement for knocking, where we will eventually do the same thing and need an entry in the rooms table as well. Reusing this function works; however, its name needs to be generalised a bit.
Separated out from #6739.
|
| |
| |
Rather than waiting until we handle the event, call the ThirdPartyRules check
when we first create the event.
|
|
|
|
|
| |
There's not much point in calling these *after* we have decided to accept them
into the DAG.
|
|
|
|
|
| |
(#8476)
Should fix #3365.
|
| |
There's no need for it to be in the dict as well as the events table. Instead,
we store it in a separate attribute in the EventInternalMetadata object, and
populate that on load.
This means that we can rely on it being correctly populated for any event which
has been persisted to the database.
|
| |
|
|
|
| |
For some reason, an apparently unrelated PR upset mypy about this module. Here are a number of little fixes.
|
| |
| |
The idea is to remove some of the places we pass around `int`, where it can represent one of two things:
1. the position of an event in the stream; or
2. a token that partitions the stream, used as part of the stream tokens.
The valid operations are then:
1. did a position happen before or after a token;
2. get all events that happened before or after a token; and
3. get all events between two tokens.
(Note that we don't want to allow other operations as we want to change the tokens to be vector clocks rather than simple ints)
|
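A toy model of the split (not Synapse's real types), exposing only the operations above:

```python
import attr

@attr.s(frozen=True, slots=True)
class EventPosition:
    """Where a single event sits in the stream."""
    stream = attr.ib(type=int)

@attr.s(frozen=True, slots=True)
class StreamToken:
    """Partitions the stream; handed out to clients."""
    stream = attr.ib(type=int)

    def is_after(self, pos: EventPosition) -> bool:
        # operation 1: did the position happen at or before this token?
        return pos.stream <= self.stream
```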
|\
| | |
Synapse 1.20.0rc5 (2020-09-18)
==============================
In addition to the below, Synapse 1.20.0rc5 also includes the bug fix that was included in 1.19.3.
Features
--------
- Add flags to the `/versions` endpoint for whether new rooms default to using E2EE. ([\#8343](https://github.com/matrix-org/synapse/issues/8343))
Bugfixes
--------
- Fix rate limiting of federation `/send` requests. ([\#8342](https://github.com/matrix-org/synapse/issues/8342))
- Fix a longstanding bug where back pagination over federation could get stuck if it failed to handle a received event. ([\#8349](https://github.com/matrix-org/synapse/issues/8349))
Internal Changes
----------------
- Blacklist [MSC2753](https://github.com/matrix-org/matrix-doc/pull/2753) SyTests until it is implemented. ([\#8285](https://github.com/matrix-org/synapse/issues/8285))
|
| | |
Instead of just using the most recent extremities let's pick the
ones that will give us results that the pagination request cares about,
i.e. pick extremities only if they have a smaller depth than the
pagination token.
This is useful when we fail to backfill an extremity, as we no longer
get stuck requesting that same extremity repeatedly.
|
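A simplified sketch of the selection rule (extremities modelled as hypothetical `(event_id, depth)` pairs):

```python
def pick_backfill_points(extremities, pagination_depth):
    # Only extremities shallower than the pagination token are useful to
    # the request, and ignoring the rest means a repeatedly-failing
    # extremity can no longer be picked over and over.
    usable = [(ev, depth) for ev, depth in extremities if depth < pagination_depth]
    usable.sort(key=lambda pair: pair[1], reverse=True)  # deepest first
    return usable
```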
| |
| |
| |
| |
| |
| |
| | |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
|
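The before/after in context, for clarity:

```python
class Base:
    def __init__(self):
        self.ready = True

class Worker(Base):
    def __init__(self):
        # Before: super(Worker, self).__init__()
        super().__init__()  # Python 3 zero-argument form; same behaviour
```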
| |
| |
| |
| |
| | |
slots use less memory (and attribute access is faster) while slightly
limiting the flexibility of the class attributes. This focuses on objects
which are instantiated "often" and for short periods of time.
|
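An illustration of the trade-off with a hypothetical short-lived object:

```python
import attr

@attr.s(slots=True)
class CachedEvent:
    event_id = attr.ib()
    depth = attr.ib()

entry = CachedEvent(event_id="$abc:example.org", depth=5)
# entry.extra = 1  # AttributeError: slotted instances have no __dict__,
#                  # which is exactly what saves the memory.
```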
| |
| |
| |
| |
| |
| | |
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
|
| | |
The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it.
This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens.
The valid operations here are:
1. Is a position before or after a token
2. Fetching all events between two tokens
3. Merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that for all positions P where P is before A *or* before B, then P is before C.
Future PR will change the token type to a dedicated type.
|
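Operation 3 with multi-writer tokens (the planned shape) reduces to a per-writer maximum; a toy illustration:

```python
def merge_tokens(a: dict, b: dict) -> dict:
    # a and b map a hypothetical writer name to its stream position;
    # any position before A *or* before B is before the merged token.
    return {w: max(a.get(w, 0), b.get(w, 0)) for w in a.keys() | b.keys()}

assert merge_tokens({"w1": 352}, {"w1": 513, "w2": 7}) == {"w1": 513, "w2": 7}
```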
| |
| |
| |
| |
| | |
Removes the `user_joined_room` and stops calling it since there are no observers.
Also cleans-up some other unused signals and related code.
|
| |
| |
| |
| |
| | |
`pusher_pool.on_new_notifications` expected a min and max stream ID, however that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed.
I believe that it mostly worked as we called the function for every event. However, it would break for events that got persisted out of order, i.e., events that were persisted but where the max stream ID wasn't incremented because not all preceding events had finished persisting; push for such an event would be delayed until another event got pushed to the affected users.
|
| |
| |
| |
| | |
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
|
|/ |
| |
* Revert "Add experimental support for sharding event persister. (#8170)"
This reverts commit 82c1ee1c22a87b9e6e3179947014b0f11c0a1ac3.
* Changelog
|
|
|
| |
This requires adding a mypy plugin to fiddle with the type signatures a bit.
|
| |
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
| |
We shouldn't allow others to make_join through us if we've left the room;
reject such attempts with a 404.
Fixes #7835. Fixes #6958.
|
| |
The replication client requires that arguments are given as keyword
arguments, which was not done in this case. We also pull out the logic
so that we can catch and handle any exceptions raised, rather than
leaving them unhandled.
|
| |
When fetching the state of a room over federation we receive the event
IDs of the state and auth chain. We then fetch those events that we
don't already have.
However, we used a function that recursively fetched any missing auth
events for the fetched events, which can lead to a lot of recursion if
the server is missing most of the auth chain. This work is entirely
pointless because we would already have queued up the missing events in
the auth chain to be fetched.
Let's just disable the recursion, since it only gets called from one
place anyway.
|
|
|
| |
... instead of duplicating `config.signing_key[0]` everywhere
|
|\ |
|
| | |
|
| |
| |
| |
| | |
my editor was complaining about unset variables, so let's add some early
returns to fix that and reduce indentation/cognitive load.
|
| |
| |
| | |
fix a few things to make this pass mypy.
|
| | |
State res v2 across large data sets can be very CPU intensive, and if
all the relevant events are in the cache the algorithm will run from
start to finish within a single reactor tick. This can result in
blocking the reactor tick for several seconds, which can have major
repercussions on other requests.
To fix this we simply add the occasional `sleep(0)` during iterations to
yield execution until the next reactor tick. The aim is to only do this
for large data sets so that we don't impact otherwise quick resolutions.
|
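In Twisted terms the trick looks roughly like this (batch size and the per-event helper are assumptions):

```python
from twisted.internet import defer, reactor
from twisted.internet.task import deferLater

_YIELD_AFTER_ITERATIONS = 100  # illustrative batch size

@defer.inlineCallbacks
def resolve_in_chunks(events):
    for i, event in enumerate(events):
        if i and i % _YIELD_AFTER_ITERATIONS == 0:
            # sleep(0): hand control back to the reactor so other
            # requests can run between CPU-bound chunks.
            yield deferLater(reactor, 0, lambda: None)
        _do_state_res_work(event)

def _do_state_res_work(event):
    """Placeholder for the real per-event computation."""
```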
| | |
|
| | |
|
|/
| |
Fixes https://github.com/matrix-org/synapse/issues/2431
Adds config option `encryption_enabled_by_default_for_room_type`, which determines whether encryption should be enabled with the default encryption algorithm in private or public rooms upon creation. Whether the room is private or public is decided based upon the room creation preset that is used.
Part of this PR is also pulling out all of the individual instances of `m.megolm.v1.aes-sha2` into a constant variable to eliminate typos, à la https://github.com/matrix-org/synapse/pull/7637
Based on #7637
|
|
|
| |
We already caught some exceptions, but not all.
|
| |
| |
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).
Most of Synapse doesn't hit such races, so we don't do the waiting automagically, instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.
People probably want to look at this commit by commit.
|
|
|
|
| |
These are business as usual errors, rather than stuff we want to log at
error.
|
| |
|
| |
|
| |
|
|
|
|
| |
make sure we clear out all but one update for the user
|
|
|
|
|
| |
When we get an invite over federation, store the room version in the rooms table.
The general idea here is that, when we pull the invite out again, we'll want to know what room_version it belongs to (so that we can later redact it if need be). So we need to store it somewhere...
|
|
|
|
|
| |
`_process_received_pdu` is only called by `on_receive_pdu`, which ignores any
events for unknown rooms, so this is redundant.
|
| |
This is intended as a precursor to storing room versions when we receive an
invite over federation, but has the happy side-effect of fixing #3374 at last.
In short: change the store_room with try/except to a proper upsert which
updates the right columns.
|
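In SQL terms the change is roughly the following (column names illustrative; the syntax works on Postgres and SQLite >= 3.24):

```python
UPSERT_ROOM_SQL = """
    INSERT INTO rooms (room_id, creator, is_public, room_version)
    VALUES (?, ?, ?, ?)
    ON CONFLICT (room_id) DO UPDATE SET
        is_public = EXCLUDED.is_public,
        room_version = EXCLUDED.room_version
"""

def store_room_txn(txn, room_id, creator, is_public, room_version):
    # One statement instead of INSERT-then-except: an existing row gets
    # the right columns updated rather than being silently left stale.
    txn.execute(UPSERT_ROOM_SQL, (room_id, creator, is_public, room_version))
```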
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
|
|
|
| |
Limit the maximum number of events requested when backfilling events.
|
|
|
|
| |
... which allows us to sanity-check the create event.
|
|\
| |
| | |
pass room versions around
|
| | |
|
|\ \
| |/
|/|
| | |
Synapse 1.10.0rc2 (2020-02-06)
==============================
Bugfixes
--------
- Fix an issue with cross-signing where device signatures were not sent to remote servers. ([\#6844](https://github.com/matrix-org/synapse/issues/6844))
- Fix to the unknown remote device detection which was introduced in 1.10.rc1. ([\#6848](https://github.com/matrix-org/synapse/issues/6848))
Internal Changes
----------------
- Detect unexpected sender keys on remote encrypted events and resync device lists. ([\#6850](https://github.com/matrix-org/synapse/issues/6850))
|
| |
| |
| | |
If they don't then the device lists are probably out of sync.
|
| |
| |
| |
| |
| |
| |
| |
| | |
We were looking at the wrong event type (`m.room.encryption` vs
`m.room.encrypted`).
Also fix up the duplicate `EventTypes` entries.
Introduced in #6776.
|
|/ |
|
| |
|
|\
| |
| | |
Make `get_room_version` return a RoomVersion object
|
| |
| |
| |
| |
| | |
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
|
|/ |
|
| |
|
| |
|
|
|
|
| |
We just mark the fact that the cache may be stale in the database for
now.
|
| |
These are easier to work with than the strings and we normally have one around.
This fixes `FederationHandler._persist_auth_tree`, which was passing a
RoomVersion object into event_auth.check instead of a string.
|
|
|
| |
This is so that we don't have to rely on pulling it out from `current_state_events` table.
|
| |
|
|
|
|
| |
This could result in Synapse not fetching prev_events for new events in the room if it has missed some events.
|
|\ |
|
| |
| |
| | |
Fixes #6575
|
| | |
|
|\| |
|
| |
| |
| |
| |
| | |
(#6527)
This fixes a weird bug where, if you were determined enough, you could end up with a rejected event forming part of the state at a backwards-extremity. Authing that backwards extremity would then lead to us trying to pull the rejected event from the db (with allow_rejected=False), which would fail with a 404.
|
| |
| |
| |
| | |
The main point here is to make sure that the state returned by _get_state_in_room has been authed before we try to use it as state in the room.
|
| |
| |
| |
| |
| | |
When we perform state resolution, check that all of the events involved are in
the right room.
|
| | |
When we request the state/auth_events to populate a backwards extremity (on
backfill or in the case of missing events in a transaction push), we should
check that the returned events are in the right room rather than blindly using
them in the room state or auth chain.
Given that _get_events_from_store_or_dest takes a room_id, it seems clear that
it should be sanity-checking the room_id of the requested events, so let's do
it there.
|
| |
| |
| |
| |
| |
| | |
Make it return the state *after* the requested event, rather than the one
before it. This is a bit easier and requires fewer calls to
get_events_from_store_or_dest.
|
| |
| |
| |
| |
| | |
This is a non-functional refactor as a precursor to some other work.
|
| |
| |
| |
| |
| | |
(#6527)
This fixes a weird bug where, if you were determined enough, you could end up with a rejected event forming part of the state at a backwards-extremity. Authing that backwards extremity would then lead to us trying to pull the rejected event from the db (with allow_rejected=False), which would fail with a 404.
|
| |
| |
| | |
The main point here is to make sure that the state returned by _get_state_in_room has been authed before we try to use it as state in the room.
|
| |
| |
| |
| |
| |
| |
| | |
When we perform state resolution, check that all of the events involved are in
the right room.
|
| | |
When we request the state/auth_events to populate a backwards extremity (on
backfill or in the case of missing events in a transaction push), we should
check that the returned events are in the right room rather than blindly using
them in the room state or auth chain.
Given that _get_events_from_store_or_dest takes a room_id, it seems clear that
it should be sanity-checking the room_id of the requested events, so let's do
it there.
|
| |
| |
| |
| |
| | |
Make it return the state *after* the requested event, rather than the one
before it. This is a bit easier and requires fewer calls to
get_events_from_store_or_dest.
|
| |
| |
| |
| | |
also fix user_joined_room to consistently return deferreds
|
| |
| |
| |
| | |
... and _get_events_from_store_or_dest
|
| |
| |
| |
| |
| |
| |
| | |
and associated functions:
* on_receive_pdu
* handle_queued_pdus
* get_missing_events_for_pdu
|
| | |
PaginationHandler.get_messages is only called by RoomMessageListRestServlet,
which is async.
Chase the code path down from there:
- FederationHandler.maybe_backfill (and nested try_backfill)
- FederationHandler.backfill
|
| |
| |
| |
| | |
This just makes some of the logging easier to follow when things start going
wrong.
|
| | |
|
| | |
|
|/
|
|
|
| |
This is a non-functional refactor as a precursor to some other work.
|
|
|
|
|
| |
replace the event_info dict with an attrs thing
|
| |
| |
_update_auth_events_and_context_for_auth (#6468)
have_events was a map from event_id to rejection reason (or None) for events
which are in our local database. It was used as filter on the list of
event_ids being passed into get_events_as_list. However, since
get_events_as_list will ignore any event_ids that are unknown or rejected, we
can equivalently just leave it to get_events_as_list to do the filtering.
That means that we don't have to keep `have_events` up-to-date, and can use
`have_seen_events` instead of `get_seen_events_with_rejection` in the one place
we do need it.
|
| |
Implement part [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). The parts that differ are:
* the feature is hidden behind a configuration flag (`enable_ephemeral_messages`)
* self-destruction doesn't happen for state events
* only implement support for the `m.self_destruct_after` field (not the `m.self_destruct` one)
* doesn't send synthetic redactions to clients because for this specific case we consider the clients to be able to destroy an event themselves, instead we just censor it (by pruning its JSON) in the database
|
| |
|
|\
| |
| | |
Implement message retention policies (MSC1763)
|
| |\ |
|
| | | |
|
| | | |
|
| | | |
|
|\ \ \
| | |/
| |/| |
|
| | | |
|
| | |
| | |
| | |
| | | |
It's more efficient and clearer.
|
| | | |
|
|/ /
| |
| |
| |
| | |
move event_key calculation into _update_context_for_auth_events, since it's
only used there.
|
| | |
(#6320)
Fixes a bug where rejected events were persisted with the wrong state group.
Also fixes an occasional internal-server-error when receiving events over
federation which are rejected and (possibly because they are
backwards-extremities) have no prev_group.
Fixes #6289.
|
|/
| |
* Raise an exception if accessing state for rejected events
Add some sanity checks on accessing state_group etc for
rejected events.
* Skip calculating push actions for rejected events
It didn't actually cause any bugs, because rejected events get filtered out at
various later points, but there's no point in trying to calculate the push
actions for a rejected event.
|
| |
The intention here is to make it clearer which fields we can expect to be
populated when: notably, that the _event_type etc aren't used for the
synchronous impl of EventContext.
|
| |
|
|
|
| |
* update version of black and also fix the mypy config being overridden
|
|\
| |
| | |
Add StateGroupStorage interface
|
| | |
|
|/
|
| |
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
|
|\
| |
| |
| | |
erikj/split_out_persistence_store
|
| | |
|
| |
| |
| |
| |
| | |
Make sure that we check that events sent over /send_join, /send_leave, and
/invite, are correctly signed and come from the expected servers.
|
|/ |
|
|
|
| |
This method was somewhat redundant, and confusing.
|
|
|
|
| |
The only possible rejection reason is AUTH_ERROR, so all of this is unreachable.
|
| |
|
| |
|
| |
|
| |
|
| |
| |
While this is not documented in the spec (but should be), Riot (and other clients) revoke 3PID invites by sending a m.room.third_party_invite event with an empty ({}) content to the room's state.
When the invited 3PID gets associated with a MXID, the identity server (which doesn't know about revocations) sends down to the MXID's homeserver all of the undelivered invites it has for this 3PID. The homeserver then tries to talk to the inviting homeserver in order to exchange these invites for m.room.member events.
When one of the invites is revoked, the inviting homeserver responds with a 500 error because it tries to extract a 'display_name' property from the content, which is empty. This might cause the invited server to consider that the inviting server is down and not try to exchange other, valid invites (or at least delay it).
This fix handles the case of revoked invites by avoiding trying to fetch a 'display_name' from the original invite's content, and letting the m.room.member event fail the auth rules (because, since the original invite's content is empty, it doesn't have public keys), which results in sending a 403 with the correct error message to the invited server.
|
| |
params to docstring (#6010)
Another small fixup noticed during work on a larger PR. The `origin` field of `add_display_name_to_third_party_invite` is not used and likely was just carried over from the `on_PUT` method of `FederationThirdPartyInviteExchangeServlet` which, like all other servlets, provides an `origin` argument.
Since it's not used anywhere in the handler function though, we should remove it from the function arguments.
|
|
|
|
|
| |
Python will return a tuple whether there are parentheses around the returned values or not.
I'm just sick of my editor complaining about this all over the place :)
|
| |
|
|\
| |
| | |
Handle RequestSendFailed exception correctly in more places.
|
| | |
|
|/ |
|
|\
| |
| | |
Log when we receive a /make_* request from a different origin
|
| | |
|
|/ |
|
| |
|
| |
|
|\
| |
| | |
Handle the case of `get_missing_events` failing
|
| | |
|
| |\
| | |
| | |
| | | |
erikj/fix_get_missing_events_error
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Currently if a call to `/get_missing_events` fails we log an exception
and stop processing the top level event we received over federation.
Instead, let's try to handle it sensibly, given that it is a somewhat expected
failure mode.
|
| |/
|/|
| | |
I had to add quite a lot of logging to diagnose a problem with 3pid
invites - we only logged the one failure which isn't all that
informative.
NB. I'm not convinced the logic of this loop is right: I think it
should just accept a single valid signature from a trusted source
rather than fail if *any* signature is invalid. Also it should
probably not skip the rest of middle loop if a check fails? However,
I'm deliberately not changing the logic here.
|
|\ \
| | |
| | | |
Fix 3PID invite room state over federation.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Fixes an issue where, when a user exchanges a 3PID invite for a proper invite
over federation, it does not include the `invite_room_state` key.
This was due to synapse incorrectly sending out two invite requests.
|
|/ / |
|
| | |
|
| | |
When processing an incoming event over federation, we may try and
resolve any unexpected differences in auth events. This is a
non-essential process and so should not stop the processing of the event
if it fails (e.g. due to the remote disappearing or not implementing the
necessary endpoints).
Fixes #3330
|
| |
| |
| |
| |
| | |
I was staring at this function trying to figure out wtf it was actually
doing. This is (hopefully) a non-functional refactor which makes it a bit
clearer.
|
|/
| |
When considering the candidates to be forward-extremities, we must exclude soft
failures.
Hopefully fixes #5090.
|
|
|
|
| |
Collect all the things that make room-versions different to one another into
one place, so that it's easier to define new room versions.
|
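The collection point might be shaped like a frozen descriptor per version plus a lookup table (fields and values here are illustrative, not the exact Synapse set):

```python
import attr

@attr.s(frozen=True, slots=True)
class RoomVersionSketch:
    identifier = attr.ib(type=str)
    enforce_key_validity = attr.ib(type=bool)
    event_format = attr.ib(type=int)

KNOWN_ROOM_VERSIONS = {
    v.identifier: v
    for v in (
        RoomVersionSketch("1", False, 1),
        RoomVersionSketch("3", False, 2),
    )
}
# Defining a new room version is now one new entry here rather than
# scattered per-version conditionals across the codebase.
```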
| |
|
| |
|
| |
| |
When filtering events to send to server we check more than just history
visibility. However when deciding whether to backfill or not we only
care about the history visibility.
|
| |
|
|\
| |
| |
| | |
erikj/stop_fed_not_in_room
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|/ |
|
| |
| |
The transaction queue only sends out events that we generate. This was
done by checking the domain of the event ID, but that can no longer be used.
Instead, we may as well use the sender field.
|
| |
We currently pass FrozenEvent instead of `dict` to
`compute_event_signature`, which works by accident due to `dict(event)`
producing the correct result.
This fixes PR #4493 commit 855a151
|
|\
| |
| | |
Split up event validation between event and builder
|
| | |
The validator was being run on the EventBuilder objects, and so the
validator only checked a subset of fields. With the upcoming
EventBuilder refactor even fewer fields will be there to validate.
To get around this we split the validation into those that can be run
against an EventBuilder and those run against a fully fledged event.
|
|/ |
|
| |
|
|\ |
|
| |\
| | |
| | | |
Add room_version param to get_pdu
|
| | | |
|
| | |
| | |
| | |
| | |
| | | |
When we add new event format we'll need to know the event format or room
version when parsing events.
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| |/
| | |
Currently they're stored as non-outliers even though the server isn't in
the room, which can be problematic in places where the code assumes it
has the state for all non-outlier events.
In particular, there is an edge case where persisting the leave event
triggers a state resolution, which requires looking up the room version
from state. Since the server doesn't have the state, this causes an
exception to be thrown.
|
|/
|
|
|
| |
We also implement `make_membership_event` converting the returned
room version to an event format version.
|
| |
|
| |
| |
* Add helpers for getting prev and auth events
This is in preparation for allowing the event format to change between
room versions.
|
|\
| |
| | |
Add v2 state resolution algorithm
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
|
|/
| |
It's quite important that get_missing_events returns the *latest* events in the
room; however we were pulling event ids out of the database until we got *at
least* 10, and then taking the *earliest* of the results.
We also shouldn't really be relying on depth, and should be checking the
room_id.
|
| |
| |
If we have a forward extremity for a room as `E`, and you receive `A`, `B`,
s.t. `A -> B -> E`, and `B` also points to an unknown event `X`, then we need
to do state res between `X` and `E`.
When that happens, we need to make sure we include `X` in the state that goes
into the state res alg.
Fixes #3934.
|
|
|
|
|
|
|
|
| |
If we've fetched state events from remote servers in order to resolve the state
for a new event, we need to actually pass those events into
resolve_events_with_factory (so that it can do the state res) and then persist
the ones we need - otherwise other bits of the codebase get confused about why
we have state groups pointing to non-existent events.
|
| |
get_state_groups returns a map from state_group_id to a list of FrozenEvents,
so was very much the wrong thing to be putting as one of the entries in the
list passed to resolve_events_with_factory (which expects maps from
(event_type, state_key) to event id).
We actually want get_state_groups_ids().values() rather than
get_state_groups().
This fixes the main problem in #3923, but there are other problems with this
bit of code which get discovered once you do so.
|
| |
| |
* add some comments on things that look a bit bogus
* rename this `state` variable to avoid confusion with the `state` used
elsewhere in this function. (There was no actual conflict, but it was
a confusing bit of spaghetti.)
|
|\
| |
| | |
Logging improvements
|
| |
| |
| |
| | |
Some logging tweaks to help with debugging incoming federation transactions
|