| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
This PR allows `ThirdPartyEventRules` modules to view, manipulate and block changes to the state of whether a room is published in the public rooms directory.
While whether a room is in the public rooms list is not kept within an event in the room, `ThirdPartyEventRules` modules generally deal with controlling which modifications can happen to a room. Public room visibility fits within that idea, even if its toggle state isn't controlled through a state event.
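As a rough illustration, a module hook of this shape could veto such changes (the callback name, signature and example data below are illustrative, not necessarily the exact interface this change adds):

```python
from typing import Set


class ExampleThirdPartyRules:
    """Illustrative third-party rules module that vetoes directory changes."""

    def __init__(self) -> None:
        # Rooms whose directory visibility must never be changed (example data).
        self._locked_rooms: Set[str] = {"!ops:example.com"}

    async def check_visibility_can_be_modified(
        self, room_id: str, new_visibility: str
    ) -> bool:
        # `new_visibility` is "public" or "private"; returning False blocks the
        # change to the room's entry in the public rooms directory.
        return not (new_visibility == "public" and room_id in self._locked_rooms)
```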
|
|
|
|
|
|
|
|
| |
There's no need for it to be in the dict as well as the events table. Instead,
we store it in a separate attribute in the EventInternalMetadata object, and
populate that on load.
This means that we can rely on it being correctly populated for any event which
has been persisted to the database.
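A minimal sketch of the shape of this change, assuming the value in question is the event's stream ordering (class and function names here are illustrative):

```python
from typing import Any, Dict, Optional


class EventInternalMetadata:
    """Per-event bookkeeping that is not part of the event dict itself."""

    def __init__(self, internal_metadata_dict: Dict[str, Any]) -> None:
        self._dict = internal_metadata_dict
        # Populated from the events table when the event is loaded from the
        # database, rather than being carried around inside the dict.
        self.stream_ordering: Optional[int] = None


def populate_from_db_row(metadata: EventInternalMetadata, row: Dict[str, Any]) -> None:
    # Callers can then rely on this attribute being set for any event that
    # has been persisted to the database.
    metadata.stream_ordering = row["stream_ordering"]
```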
|
|
|
|
| |
This fixes a bug where `m.ignored_user_list` was assumed to be a dict,
leading to odd behavior for users who set it to something else.
|
| |
|
|
|
| |
This allows for connecting to certain IdPs, e.g. GitLab.
|
| |
|
|
|
| |
The idea is that in future tokens will encode a mapping of instance to position. However, we don't want to include the full instance name in the string representation, so instead we'll have a mapping between instance name and an immutable integer ID in the DB that we can use instead. We'll then do the lookup when we serialize/deserialize the token (we could alternatively pass around an `Instance` type that includes both the name and ID, but that turns out to be a lot more invasive).
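Roughly, that amounts to a small lookup table plus an in-memory cache, along these lines (class and method names are illustrative; the real mapping is backed by a database table so the IDs are stable across restarts):

```python
from typing import Dict


class InstanceNameRegistry:
    """Maps worker instance names to small immutable integer IDs.

    The integer ID is what gets baked into serialized stream tokens; the
    name <-> ID lookup happens when tokens are (de)serialized.
    """

    def __init__(self) -> None:
        self._name_to_id: Dict[str, int] = {}
        self._id_to_name: Dict[int, str] = {}

    def get_id_for_instance(self, name: str) -> int:
        # In the real thing this would consult (and insert into) a DB table;
        # here we just allocate the next free integer.
        if name not in self._name_to_id:
            new_id = len(self._name_to_id) + 1
            self._name_to_id[name] = new_id
            self._id_to_name[new_id] = name
        return self._name_to_id[name]

    def get_instance_for_id(self, instance_id: int) -> str:
        return self._id_to_name[instance_id]
```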
|
| |
|
|\
| |
| | |
Report metrics on expensive rooms for state res
|
| | |
|
|/ |
|
|
|
| |
For some reason, an apparently unrelated PR upset mypy about this module. Here are a number of little fixes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove `on_timeout_cancel` from `timeout_deferred`
The `on_timeout_cancel` param to `timeout_deferred` wasn't always called on a
timeout (in particular if the canceller raised an exception), so it was
unreliable. It was also only used in one place, and to be honest it's easier to
do what it does a different way.
* Fix handling of connection timeouts in outgoing http requests
Turns out that if we get a timeout during connection, then a different
exception is raised, which wasn't always handled correctly.
To fix it, catch the exception in SimpleHttpClient and turn it into a
RequestTimedOutError (which is already a documented exception).
Also add a description to RequestTimedOutError so that we can see which stage
it failed at.
* Fix incorrect handling of timeouts reading federation responses
This was trapping the wrong sort of TimeoutError, so was never being hit.
The effect was relatively minor, but we should fix this so that it does the
expected thing.
* Fix inconsistent handling of `timeout` param between methods
`get_json`, `put_json` and `delete_json` were applying a different timeout to
the response body than `post_json` did; bring them into line and test.
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
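A simplified sketch of the connection-timeout conversion described above (the real code deals with Twisted's exception types and the HTTP client internals, so names and types here are illustrative):

```python
class RequestTimedOutError(Exception):
    """Raised when an outgoing HTTP request times out.

    The description records which stage failed, so logs show whether the
    timeout happened while connecting or while reading the response.
    """


async def get_with_timeout(client, uri: str):
    # Convert a low-level timeout during connection into the documented
    # RequestTimedOutError rather than letting it escape unhandled.
    try:
        return await client.request("GET", uri)
    except TimeoutError as e:
        raise RequestTimedOutError("Timed out connecting to %s" % (uri,)) from e
```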
|
| |
|
|
|
|
|
|
|
| |
Co-authored-by: Benjamin Koch <bbbsnowball@gmail.com>
This adds configuration flags that will match a user to pre-existing users
when logging in via OpenID Connect. This is useful when switching to
an existing SSO system.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The idea is to remove some of the places we pass around `int`, where it can represent one of two things:
1. the position of an event in the stream; or
2. a token that partitions the stream, used as part of the stream tokens.
The valid operations are then:
1. did a position happen before or after a token;
2. get all events that happened before or after a token; and
3. get all events between two tokens.
(Note that we don't want to allow other operations as we want to change the tokens to be vector clocks rather than simple ints)
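Sketched with plain integers for clarity (the real token type is richer than a single int), the allowed operations look roughly like:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class StreamToken:
    """Partitions the event stream; distinct from an event's integer position."""

    stream: int


def position_is_after(position: int, token: StreamToken) -> bool:
    # 1. Did a position happen before or after a token?
    return position > token.stream


def events_between(
    events: List[Tuple[int, str]], lower: StreamToken, upper: StreamToken
) -> List[str]:
    # 2/3. All events strictly after `lower` and up to (and including) `upper`.
    return [event_id for pos, event_id in events if lower.stream < pos <= upper.stream]
```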
|
|
|
| |
this makes it possible to use from the manhole, and seems cleaner anyway.
|
|
|
|
|
|
|
|
|
| |
* Create a new function to verify that the length of a device name is
under a certain threshold.
* Refactor old code and tests to use said function.
* Verify device name length during registration of device
* Add a test for the above
Signed-off-by: Dionysis Grigoropoulos <dgrig@erethon.com>
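The verification function is essentially a length check of this shape (the threshold and error type here are placeholders, not necessarily the values Synapse uses):

```python
from typing import Optional

# Placeholder threshold for illustration only.
MAX_DEVICE_DISPLAY_NAME_LEN = 100


def check_device_name_length(name: Optional[str]) -> None:
    """Reject device display names longer than the allowed threshold."""
    if name is not None and len(name) > MAX_DEVICE_DISPLAY_NAME_LEN:
        raise ValueError(
            "Device display name is too long (max %i)" % (MAX_DEVICE_DISPLAY_NAME_LEN,)
        )
```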
|
| |
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.20.0rc5 (2020-09-18)
==============================
In addition to the below, Synapse 1.20.0rc5 also includes the bug fix that was included in 1.19.3.
Features
--------
- Add flags to the `/versions` endpoint for whether new rooms default to using E2EE. ([\#8343](https://github.com/matrix-org/synapse/issues/8343))
Bugfixes
--------
- Fix rate limiting of federation `/send` requests. ([\#8342](https://github.com/matrix-org/synapse/issues/8342))
- Fix a longstanding bug where back pagination over federation could get stuck if it failed to handle a received event. ([\#8349](https://github.com/matrix-org/synapse/issues/8349))
Internal Changes
----------------
- Blacklist [MSC2753](https://github.com/matrix-org/matrix-doc/pull/2753) SyTests until it is implemented. ([\#8285](https://github.com/matrix-org/synapse/issues/8285))
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Instead of just using the most recent extremities let's pick the
ones that will give us results that the pagination request cares about,
i.e. pick extremities only if they have a smaller depth than the
pagination token.
This is useful when we fail to backfill an extremity, as we no longer
get stuck requesting that same extremity repeatedly.
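In effect the selection becomes something like the following sketch, where extremities are assumed to be known along with their depths (names and the limit are illustrative):

```python
from typing import Dict, List


def pick_backfill_extremities(
    extremity_depths: Dict[str, int], pagination_depth: int, limit: int = 5
) -> List[str]:
    """Pick extremities whose depth is smaller than the pagination token's.

    Extremities "ahead" of the token are skipped, so an extremity we keep
    failing to backfill is no longer requested over and over for requests
    that don't need it.
    """
    candidates = [
        (depth, event_id)
        for event_id, depth in extremity_depths.items()
        if depth < pagination_depth
    ]
    # Prefer the deepest of the remaining candidates.
    candidates.sort(reverse=True)
    return [event_id for _, event_id in candidates[:limit]]
```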
|
| |
| |
| |
| |
| |
| |
| | |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
|
| | |
|
| |
| |
| |
| |
| | |
slots use less memory (and attribute access is faster) while slightly
limiting the flexibility of the class attributes. This focuses on objects
which are instantiated "often" and for short periods of time.
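For example, with attrs (which Synapse uses heavily) or a plain `__slots__` declaration; the class and field names below are purely illustrative:

```python
import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class CacheEntry:
    # Slotted classes skip the per-instance __dict__, saving memory for
    # objects that are created often and live briefly.
    key: str
    value: int


class PlainSlotted:
    __slots__ = ("key", "value")

    def __init__(self, key: str, value: int) -> None:
        self.key = key
        self.value = value
```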
|
| | |
|
| |
| |
| |
| |
| |
| | |
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it.
This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens.
The valid operations here are:
1. Is a position before or after a token
2. Fetching all events between two tokens
3. Merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that for all positions P where P is before A *or* before B, then P is before C.
Future PR will change the token type to a dedicated type.
|
| |
| |
| |
| |
| | |
Removes the `user_joined_room` signal and stops calling it since there are no observers.
Also cleans up some other unused signals and related code.
|
| |
| |
| |
| |
| | |
`pusher_pool.on_new_notifications` expected a min and max stream ID, however that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed.
I believe that it mostly worked as we called the function for every event. However, it would break for events that got persisted out of order, i.e. that were persisted but the max stream ID wasn't incremented as not all preceding events had finished persisting, and push for that event would be delayed until another event got pushed to the affected users.
|
| |
| |
| |
| | |
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
|
| | |
|
| | |
|
| |
| |
| | |
The intention here is to change `StreamToken.room_key` to be a `RoomStreamToken` in a future PR, but that is a big enough change without this refactoring too.
|
|/
|
| |
This removes `SourcePaginationConfig` and `get_pagination_rows`. The reasoning behind this is that these generic classes/functions erased the types of the IDs they used (i.e. instead of passing around `StreamToken` they'd pass in e.g. `token.room_key`, which don't have uniform types).
|
| |
|
|
|
|
|
|
|
| |
* Revert "Add experimental support for sharding event persister. (#8170)"
This reverts commit 82c1ee1c22a87b9e6e3179947014b0f11c0a1ac3.
* Changelog
|
| |
|
|
|
| |
This requires adding a mypy plugin to fiddle with the type signatures a bit.
|
| |
|
| |
|
|
|
|
|
|
| |
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Move `get_devices_with_keys_by_user` to `EndToEndKeyWorkerStore`
this seems a better fit for it.
This commit simply moves the existing code: no other changes at all.
* Rename `get_devices_with_keys_by_user`
to better reflect what it does.
* get_device_stream_token abstract method
To avoid referencing fields which are declared in the derived classes, make
`get_device_stream_token` abstract, and define that in the classes which define
`_device_list_id_gen`.
|
|
|
|
|
|
|
|
|
|
|
| |
... and to show that it does something slightly different to
`_get_e2e_device_keys_txn`.
`include_all_devices` and `include_deleted_devices` were never used (and
`include_deleted_devices` was broken, since it would cause `None`s in the
result which were not handled in the loop below).
Add some typing too.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
This is split out from https://github.com/matrix-org/synapse/pull/7438, which had gotten rather large.
`LoginRestServlet` has a couple helper methods, `login_submission_legacy_convert` and `login_id_thirdparty_from_phone`. They're primarily used for converting legacy user login submissions to "identifier" dicts ([see spec](https://matrix.org/docs/spec/client_server/r0.6.1#post-matrix-client-r0-login)). Identifying information such as usernames or 3PID information used to be top-level in the login body. They're now supposed to be put inside an [identifier](https://matrix.org/docs/spec/client_server/r0.6.1#identifier-types) parameter instead.
#7438's purpose is to allow using the new identifier parameter during User-Interactive Authentication, which is currently handled in AuthHandler. That's why I've moved these helper methods there. I also moved the refactoring of these methods from #7438 as it's relevant.
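For reference, the conversion these helpers perform is roughly the following (a sketch with an illustrative function name; the real helpers also handle phone-number/msisdn logins and validation):

```python
from typing import Any, Dict


def login_submission_to_identifier(submission: Dict[str, Any]) -> Dict[str, Any]:
    """Convert a legacy login body to the spec's `identifier` dict."""
    if "identifier" in submission:
        return submission["identifier"]
    if "user" in submission:
        # Legacy: a top-level `user` field becomes an m.id.user identifier.
        return {"type": "m.id.user", "user": submission["user"]}
    if "medium" in submission and "address" in submission:
        # Legacy: top-level 3PID fields become an m.id.thirdparty identifier.
        return {
            "type": "m.id.thirdparty",
            "medium": submission["medium"],
            "address": submission["address"],
        }
    raise ValueError("No identifier in login submission")
```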
|
| |
|
| |
|
|
|
|
|
|
| |
Small cleanup PR.
* Removed the unused `is_guest` argument
* Added a safeguard to a (currently) impossible code path, fixing static checking at the same time.
|
| |
|
| |
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.19.1rc1 (2020-08-25)
==============================
Bugfixes
--------
- Fix a bug introduced in v1.19.0 where appservices with ratelimiting disabled would still be ratelimited when joining rooms. ([\#8139](https://github.com/matrix-org/synapse/issues/8139))
- Fix a bug introduced in v1.19.0 that would cause e.g. profile updates to fail due to incorrect application of rate limits on join requests. ([\#8153](https://github.com/matrix-org/synapse/issues/8153))
|
| | |
|
| |
| |
| |
| |
| |
| | |
Add new method ratelimiter.can_requester_do_action and ensure that appservices are exempt from being ratelimited.
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
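A minimal sketch of the new check (a sketch only: the token-bucket accounting is elided, and the attribute/method names are assumptions about how the requester and appservice objects are shaped):

```python
class Ratelimiter:
    """Sketch of the requester-aware check; the real accounting is elided."""

    def can_requester_do_action(self, requester) -> bool:
        # Appservices may have rate limiting disabled in their registration
        # file; such requesters are exempt from the limiter entirely.
        app_service = getattr(requester, "app_service", None)
        if app_service is not None and not app_service.is_rate_limited():
            return True
        # Otherwise fall back to the usual per-user check.
        return self.can_do_action(requester.user.to_string())

    def can_do_action(self, key: str) -> bool:
        # Placeholder for the existing leaky-bucket accounting.
        return True
```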
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| | |
Add new method ratelimiter.can_requester_do_action and ensure that appservices are exempt from being ratelimited.
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
|
| | |
|
| | |
|
| |
| |
| |
| | |
guests. (#8135)
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| | |
Fixes https://github.com/matrix-org/synapse/issues/6583
|
| | |
|
| | |
|
| | |
|
|/
|
|
|
|
|
| |
If we got an error persisting an event, we would try to remove the push actions
asynchronously, which would lead to a 'Re-starting finished log context'
warning.
I don't think there's any need for this to be asynchronous.
|
| |
|
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
Hopefully this mostly speaks for itself. I also did a bit of cleaning up of the
error handling.
Fixes #8047
|
|/
|
|
|
|
|
|
|
|
| |
Duplicating function signatures between server.py and server.pyi is
silly. This commit changes that by changing all `build_*` methods to
`get_*` methods and changing the `_make_dependency_method` to work
as a descriptor that caches the produced value.
There are some changes in other files that were made to fix the typing
in server.py.
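The caching behaviour can be pictured with a decorator-based stand-in like this (the real implementation is a descriptor, and the names here are illustrative):

```python
import functools
from typing import Callable, TypeVar

T = TypeVar("T")


def cache_in_self(builder: Callable[["HomeServer"], T]) -> Callable[["HomeServer"], T]:
    """Wrap a get_* method so its dependency is built once per HomeServer."""
    attr_name = "_cached_" + builder.__name__

    @functools.wraps(builder)
    def _get(self: "HomeServer") -> T:
        if not hasattr(self, attr_name):
            setattr(self, attr_name, builder(self))
        return getattr(self, attr_name)

    return _get


class Clock:
    """Stand-in for a dependency that is expensive to build."""


class HomeServer:
    @cache_in_self
    def get_clock(self) -> Clock:
        return Clock()
```

Calling `HomeServer().get_clock()` twice then returns the same `Clock` instance, so the `.pyi` stub duplication is no longer needed to keep typing happy.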
|
|
|
|
|
|
| |
I think this would have caught all the cases in
https://github.com/matrix-org/synapse/issues/7642 - and I think a 500 makes
more sense here than a 403
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
We've [decided](https://github.com/matrix-org/synapse/issues/5253#issuecomment-665976308) to remove the signature check for v1 lookups.
The signature check has already been removed in v2 lookups, and v1 lookups are currently deprecated. As mentioned in the above linked issue, this verification was causing problems for the vector.im and matrix.org IS deployments, and this change is the simplest solution without being unjustified.
Implementations are encouraged to use the v2 lookup API as it has [increased privacy benefits](https://github.com/matrix-org/matrix-doc/pull/2134).
|
|
|
|
|
|
|
|
|
|
|
| |
`StatsHandler` handles updates to the `current_state_delta_stream`, and updates room stats such as the amount of state events, joined users, etc.
However, it counts every new join membership as a new user entering a room (and that user being in another room), whereas it's possible for a user's membership status to go from join -> join, for instance when they change their per-room profile information.
This PR adds a check for join->join membership transitions, and bails out early, as none of the further checks are necessary at that point.
Due to this bug, membership stats in many rooms have ended up being wildly larger than their true values. I am not sure if we also want to include a migration step which recalculates these statistics (possibly using the `_populate_stats_process_rooms` bg update).
Bug introduced in the initial implementation https://github.com/matrix-org/synapse/pull/4338.
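The early-return check amounts to this (membership constants written as plain strings for brevity):

```python
def is_join_to_join(prev_membership: str, new_membership: str) -> bool:
    """True for join -> join transitions, e.g. per-room profile updates.

    Such deltas should not be counted as a new user entering the room, so
    the stats update can bail out early when this returns True.
    """
    return prev_membership == "join" and new_membership == "join"
```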
|
| |
|
| |
|
|\
| |
| |
| | |
erikj/add_rate_limiting_to_joins
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Thanks to some slightly overzealous cleanup in
`delete_old_current_state_events`, it's possible to end up with no
`event_forward_extremities` in a room where we have outstanding local
invites. The user would then get a "no create event in auth events" error when
trying to reject the invite.
We can hack around it by using the dangling invite as the prev event.
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| | |
Fixes #7901.
Signed-off-by: Niklas Tittjung <nik_t.01@web.de>
|
| | |
|
|/ |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
| |
Fixes #7774
|
| |
|
| |
|
|
|
|
|
|
| |
We shouldn't allow others to make_join through us if we've left the room;
reject such attempts with a 404.
Fixes #7835. Fixes #6958.
|
| |
|
|\
| |
| | |
Fix guest user registration with lots of client readers
|
| | |
|
| |
| |
| | |
I found these made pycharm have more of a clue as to what was going on in other places.
|
| |
| |
| |
| | |
json. (#7836)
|
|\ \
| |/
|/| |
|
| | |
|
| | |
|
|/ |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The Delete Room admin API allows server admins to remove rooms from the server
and block these rooms.
`DELETE /_synapse/admin/v1/rooms/<room_id>`
It is a combination and improvement of "[Shutdown room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/shutdown_room.md)" and "[Purge room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/purge_room.md)" API.
Fixes: #6425
It also fixes a bug in [synapse/storage/data_stores/main/room.py](synapse/storage/data_stores/main/room.py) in `get_room_with_stats`:
it should return `None` if the room is unknown, but instead it raises an `IndexError`.
https://github.com/matrix-org/synapse/blob/901b1fa561e3cc661d78aa96d59802cf2078cb0d/synapse/storage/data_stores/main/room.py#L99-L105
Related to:
- #5575
- https://github.com/Awesome-Technologies/synapse-admin/issues/17
Signed-off-by: Dirk Klimpel dirk@klimpel.org
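An illustrative call to the endpoint from Python (the request-body options shown are assumptions; consult the admin API docs for the exact parameters):

```python
import requests

resp = requests.delete(
    # %21...%3A is the URL-encoded room ID, e.g. !badroom:example.com
    "https://synapse.example.com/_synapse/admin/v1/rooms/%21badroom%3Aexample.com",
    headers={"Authorization": "Bearer <admin access token>"},
    json={
        "new_room_user_id": "@admin:example.com",  # users are moved to a new room
        "block": True,   # prevent the room from being rejoined
        "purge": True,   # remove the room's history from the database
    },
)
resp.raise_for_status()
print(resp.json())
```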
|
| |
|
| |
|
|
|
|
|
|
| |
The replication client requires that arguments are given as keyword
arguments, which was not done in this case. We also pull out the logic
so that we can catch and handle any exceptions raised, rather than
leaving them unhandled.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When fetching the state of a room over federation we receive the event
IDs of the state and auth chain. We then fetch those events that we
don't already have.
However, we used a function that recursively fetched any missing auth
events for the fetched events, which can lead to a lot of recursion if
the server is missing most of the auth chain. This work is entirely
pointless because we would have already queued up the missing events in the
auth chain to be fetched.
Let's just disable the recursion, since it only gets called from one
place anyway.
|
| |
|
|
|
| |
It seems auth_events can be either a list or a tuple, depending on Things.
|
|
|
|
|
|
|
|
| |
Fixes #2181.
The basic premise is that, when we
fail to reject an invite via the remote server, we can generate our own
out-of-band leave event and persist it as an outlier, so that we have something
to send to the client.
|
|
|
| |
... instead of duplicating `config.signing_key[0]` everywhere
|
| |
|
| |
|
| |
|
|
|
| |
The CI appears to use the latest version of isort, which is a problem when isort gets a major version bump. Rather than try to pin the version, I've done what's necessary to make isort 5 happy with Synapse.
|
|
|
| |
fixes #7016
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.16.0rc2 (2020-07-02)
==============================
Synapse 1.16.0rc2 includes the security fixes released with Synapse 1.15.2.
Please see [below](https://github.com/matrix-org/synapse/blob/master/CHANGES.md#synapse-1152-2020-07-02) for more details.
Improved Documentation
----------------------
- Update postgres image in example `docker-compose.yaml` to tag `12-alpine`. ([\#7696](https://github.com/matrix-org/synapse/issues/7696))
Internal Changes
----------------
- Add some metrics for inbound and outbound federation latencies: `synapse_federation_server_pdu_process_time` and `synapse_event_processing_lag_by_event`. ([\#7771](https://github.com/matrix-org/synapse/issues/7771))
|
| |\ |
|
| | |
| | |
| | |
| | | |
Introduced in #7755, not yet released.
|
|\ \ \
| | |/
| |/| |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | | |
my editor was complaining about unset variables, so let's add some early
returns to fix that and reduce indentation/cognitive load.
|
| |/
|/|
| | |
fix a few things to make this pass mypy.
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
State res v2 across large data sets can be very CPU intensive, and if
all the relevant events are in the cache the algorithm will run from
start to finish within a single reactor tick. This can result in
blocking the reactor tick for several seconds, which can have major
repercussions on other requests.
To fix this we simply add the occasional `sleep(0)` during iterations to
yield execution until the next reactor tick. The aim is to only do this
for large data sets so that we don't impact otherwise quick resolutions.
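The yielding pattern is roughly this (names are illustrative): every so often in a long, CPU-bound loop, sleep for zero seconds so control returns to the reactor and other requests can make progress.

```python
_YIELD_AFTER_ITERATIONS = 100


async def resolve_large_set(events, clock):
    result = []
    for i, event in enumerate(events, start=1):
        result.append(event)  # stand-in for the actual state-resolution work
        if i % _YIELD_AFTER_ITERATIONS == 0:
            # Only large data sets ever reach this, so quick resolutions
            # are unaffected.
            await clock.sleep(0)
    return result
```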
|
| | |
|
| | |
|
|\ \
| | |
| | | |
Implementation of https://github.com/matrix-org/matrix-doc/pull/2625
|
| |\ \ |
|
| |\ \ \ |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | |_|/
| |/| | |
|
| | | |
| | | |
| | | | |
The aim here is to make it easier to reason about when streams are limited and when they're not, by moving the logic into the database functions themselves. This should mean we can kill off the `db_query_to_update_function` function.
|
| | | | |
|
| | | | |
|
| | | | |
|
| |_|/
|/| | |
|
| | | |
|
| |/
|/| |
|
|/
|
|
|
|
|
|
|
| |
Fixes https://github.com/matrix-org/synapse/issues/2431
Adds config option `encryption_enabled_by_default_for_room_type`, which determines whether encryption should be enabled with the default encryption algorithm in private or public rooms upon creation. Whether the room is private or public is decided based upon the room creation preset that is used.
Part of this PR is also pulling out all of the individual instances of `m.megolm.v1.aes-sha2` into a constant variable to eliminate typos, à la https://github.com/matrix-org/synapse/pull/7637
Based on #7637
|
| |
|
| |
|
|
|
|
| |
Fixes https://github.com/matrix-org/synapse/issues/3177
|
| |
|
|
|
|
| |
active user limit has been reached (#7263)
|
|
|
|
|
|
|
|
|
|
| |
While working on https://github.com/matrix-org/synapse/issues/5665 I found myself digging into the `Ratelimiter` class and seeing that it was both:
* Rather undocumented, and
* causing a *lot* of config checks
This PR attempts to refactor and comment the `Ratelimiter` class, as well as encourage config file accesses to only be done at instantiation.
Best to be reviewed commit-by-commit.
|
|
|
|
| |
docs, default configs, comments. Nothing very significant.
|
|
|
|
|
| |
flow (#7625)
This is so the user is warned about the username not being valid as soon as possible, rather than only once they've finished UIA.
|
|
|
| |
We already caught some exceptions, but not all.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Expose `return_html_error`, and allow it to take a Jinja2 template instead of a raw string
* Clean up exception handling in SAML2ResponseResource
* use the existing code in `return_html_error` instead of re-implementing it
(giving it a jinja2 template rather than inventing a new form of template)
* do the exception-catching in the REST layer rather than in the handler
layer, to make sure we catch all exceptions.
|
|
|
| |
It looks like `user_device_resync` was ignoring cross-signing keys from the results received from the remote server. This patch fixes this by processing these keys using the same process `_handle_signing_key_updates` does (effectively factoring that part out of that function).
|
| |
|
|
|
| |
Without this patch, if an error happens which isn't caught by `user_device_resync`, then `_maybe_retry_device_resync` would fail, without retrying the next users in the iteration. This patch fixes this so that it now only logs an error in this case.
|
|
|
|
| |
(#7599)
|
|
|
| |
Signed-off-by: Christopher Cooper <cooperc@ocf.berkeley.edu>
|
| |
|
|
|
| |
These are surprisingly expensive, and we only really need to do them at startup.
|
| |
|
|
|
|
|
|
|
| |
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).
Most of Synapse doesn't hit such races, so we don't do the waiting automagically; instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.
People probably want to look at this commit by commit.
|
|
|
|
| |
Mainly because sometimes the email push code raises exceptions where the
stack traces have gotten lost, which is hopefully fixed by this.
|
|
|
|
|
|
|
|
| |
Instead of doing a complicated dance of deleting and moving aliases one
by one, which sends a canonical alias update into the old room for each
one, let's do it all in one go.
This also changes the function to move *all* local alias events to the new
room, however that happens later on anyway.
|
|
|
|
| |
These are business as usual errors, rather than stuff we want to log at
error.
|
|
|
|
|
|
|
|
|
|
|
| |
When a call to `user_device_resync` fails, we don't currently mark the remote user's device list as out of sync, nor do we retry to sync it.
https://github.com/matrix-org/synapse/pull/6776 introduced some code infrastructure to mark device lists as stale/out of sync.
This commit uses that code infrastructure to mark device lists as out of sync if processing an incoming device list update makes the device handler realise that the device list is out of sync, but we can't resync right now.
It also adds a looping call to retry all failed resyncs every 30s. This shouldn't cause too much spam in the logs as this commit also removes the "Failed to handle device list update for..." warning logs when catching `NotRetryingDestination`.
Fixes #7418
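A sketch of the retry loop (the store method and exact wiring are assumptions made for illustration; Synapse's `Clock.looping_call` takes its interval in milliseconds):

```python
class DeviceListResyncRetrier:
    """Every 30s, retry resyncing device lists previously marked as stale."""

    def __init__(self, clock, store) -> None:
        self.clock = clock
        self.store = store
        self.clock.looping_call(self._maybe_retry_device_resync, 30 * 1000)

    async def _maybe_retry_device_resync(self) -> None:
        for user_id in await self.store.get_users_with_stale_device_lists():
            try:
                await self.user_device_resync(user_id)
            except Exception:
                # Log and move on so one failure doesn't block the rest.
                continue

    async def user_device_resync(self, user_id: str) -> None:
        ...  # fetch the user's current device list from their homeserver
```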
|
|
|
|
| |
This now matches the logic of the registration process as modified in
56db0b1365965c02ff539193e26c333b7f70d101 / #7523.
|
|
|
|
|
|
|
|
|
| |
(#7497)
Per https://github.com/matrix-org/matrix-doc/issues/1436#issuecomment-410089470 they should be omitted instead of returning null or "". They aren't marked as required in the spec.
Fixes https://github.com/matrix-org/synapse/issues/7333
Signed-off-by: Aaron Raimist <aaron@raim.ist>
|
|\
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.13.0rc3 (2020-05-18)
==============================
Bugfixes
--------
- Hash passwords as early as possible during registration. ([\#7523](https://github.com/matrix-org/synapse/issues/7523))
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
|
| |
| |
| |
| | |
This backs out some of the validation for the client dictionary and logs if
this changes during a user interactive authentication session instead.
|
| |
| |
| | |
This is safe as we can now write to the cache invalidation stream on workers, and is required for when we move event persistence off master.
|
| | |
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* release-v1.13.0:
Don't UPGRADE database rows
RST indenting
Put rollback instructions in upgrade notes
Fix changelog typo
Oh yeah, RST
Absolute URL it is then
Fix upgrade notes link
Provide summary of upgrade issues in changelog. Fix )
Move next version notes from changelog to upgrade notes
Changelog fixes
1.13.0rc1
Documentation on setting up redis (#7446)
Rework UI Auth session validation for registration (#7455)
Fix errors from malformed log line (#7454)
Drop support for redis.dbid (#7450)
|
| |
| |
| |
| | |
Be less strict about validation of UI authentication sessions during
registration to match client expectations.
|
| | |
|
|\| |
|
| |
| |
| | |
Add `dummy_events_threshold`, which allows configuring the number of forward extremities a room needs before Synapse sends dummy events into it.
|
| | |
|
|\| |
|
| |\ |
|
| | |
| | |
| | |
| | |
| | | |
Currently we copy `users_who_share_room` needlessly about three times,
which is expensive when the set is large (which it can easily be).
|
| |/
|/| |
|
|/ |
|
|
|
|
|
| |
By persisting the user interactive authentication sessions to the database, this fixes
situations where a user hits different workers throughout their auth session, and also
allows sessions to persist through restarts of Synapse.
|
| |
|
| |
|
|
|
|
|
|
|
| |
Long story short: if we're handling presence on the current worker, we shouldn't be sending USER_SYNC commands over replication.
In an attempt to figure out what is going on here, I ended up refactoring some bits of the presencehandler code, so the first 4 commits here are non-functional refactors to move this code slightly closer to sanity. (There's still plenty to do here :/). Suggest reviewing individual commits.
Fixes (I hope) #7257.
|
|\ |
|
| | |
|
| | |
|
|\| |
|
| |
| |
| |
| |
| |
| | |
This was incorrectly merged to the release branch before it was ready.
This reverts commit 72fe2affb6ac86d433b80b6452da57052365aa26.
|
|\| |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Add changelog
Save retrieved keys to the db
lint
Fix and de-brittle remote result dict processing
Use query_user_devices instead, assume only master, self_signing key types
Make changelog more useful
Remove very specific exception handling
Wrap get_verify_key_from_cross_signing_key in a try/except
Note that _get_e2e_cross_signing_verify_key can raise a SynapseError
lint
Add comment explaining why this is useful
Only fetch master and self_signing key types
Fix log statements, docstrings
Remove extraneous items from remote query try/except
lint
Factor key retrieval out into a separate function
Send device updates, modeled after SigningKeyEduUpdater._handle_signing_key_updates
Update method docstring
|
| | |
|
| |
| |
| |
| | |
(#7268)
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| | |
room directory. (#7260)
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| | |
public rooms list (#6899)
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| | |
make sure we clear out all but one update for the user
|
| | |
|
|\ \
| | |
| | |
| | |
| | | |
matrix-org/dbkr/always_send_own_device_list_updates
Always send the user updates to their own device list
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | |
| | | |
This will allow clients to notify users about new devices even if
the user isn't in any rooms (yet).
|